codelion committed · verified · Commit 6940376 · Parent(s): 213a1f5

Update dataset card with correct schema

Files changed (1): README.md (+51 −30)
README.md CHANGED

````diff
@@ -10,19 +10,40 @@ dataset_info:
   features:
   - name: text
     dtype: string
+  - name: id
+    dtype: string
+  - name: wikiname
+    dtype: string
+  - name: page_id
+    dtype: int64
+  - name: title
+    dtype: string
   - name: url
     dtype: string
+  - name: date_modified
+    dtype: string
+  - name: in_language
+    dtype: string
+  - name: wikidata_id
+    dtype: string
+  - name: bytes_html
+    dtype: int64
+  - name: wikitext
+    dtype: string
+  - name: version
+    dtype: int64
+  - name: infoboxes
+    dtype: string
+  - name: has_math
+    dtype: bool
   splits:
   - name: train
-    num_bytes: 22131652
-    num_examples: 4913
-size_categories:
-- 1K<n<10K
+    num_examples: 7088
 ---
 
 # FineWiki Sampled Dataset (10,000,000 tokens)
 
-This is a sampled subset of [HuggingFaceFW/finewiki](https://huggingface.co/datasets/HuggingFaceFW/finewiki) containing approximately **10,002,154 tokens**.
+This is a sampled subset of [HuggingFaceFW/finewiki](https://huggingface.co/datasets/HuggingFaceFW/finewiki) containing approximately **10,000,000 tokens**.
 
 ## Dataset Details
 
@@ -30,14 +51,11 @@ This is a sampled subset of [HuggingFaceFW/finewiki](https://huggingface.co/data
 - **Original Dataset**: HuggingFaceFW/finewiki (English subset, train split)
 - **Sampling Method**: Reservoir sampling (unbiased random sampling)
 - **Target Token Count**: 10,000,000 tokens
-- **Actual Token Count**: 10,002,154 tokens
 - **Tokenizer**: GPT-2 (50,257 vocabulary)
 
 ### Sampling Statistics
-- **Documents Sampled**: 4,913
-- **Documents Processed**: 4,913
-- **Tokens Processed**: 10,002,154
-- **Sampling Rate**: 1.0000
+- **Documents Sampled**: 7,088
+- **Average Tokens/Doc**: 1411.0
 - **Random Seed**: 42
 
 ### Sampling Method
@@ -51,10 +69,8 @@ This dataset was created using **reservoir sampling**, which ensures:
 The sampling algorithm:
 1. Streams through HuggingFaceFW/finewiki without downloading
 2. Uses GPT-2 tokenizer to count tokens per document
-3. Maintains a reservoir of documents until target token count
-4. For each new document, replaces reservoir items with probability k/n
-   - k = reservoir size, n = total documents seen
-5. Guarantees uniform random sample across entire dataset
+3. Maintains a reservoir of documents using standard reservoir sampling
+4. Stops when target token count is reached
 
 ## Usage
 
@@ -67,13 +83,28 @@ dataset = load_dataset("codelion/finewiki-10M")
 # Access the training data
 for example in dataset['train']:
     print(example['text'])
+    print(example['title'])
+    print(example['url'])
 ```
 
 ## Dataset Structure
 
-Each example contains:
-- `text`: The Wikipedia article text
-- `url`: Source Wikipedia URL
+Each example contains all fields from the original FineWiki dataset:
+
+- **text** (string): The Wikipedia article text (primary content)
+- **id** (string): Unique identifier
+- **wikiname** (string): Wikipedia source name
+- **page_id** (int64): Wikipedia page ID
+- **title** (string): Article title
+- **url** (string): Source Wikipedia URL
+- **date_modified** (string): Last modification date
+- **in_language** (string): Language code (always 'en' for this subset)
+- **wikidata_id** (string): Wikidata identifier
+- **bytes_html** (int64): Size of HTML content
+- **wikitext** (string): Original wikitext markup
+- **version** (int64): Article version number
+- **infoboxes** (string): Extracted infobox data
+- **has_math** (bool): Whether article contains mathematical formulas
 
 ## Use Cases
 
@@ -85,17 +116,7 @@ This sampled dataset is ideal for:
 
 ## Citation
 
-If you use this dataset, please cite both the original FineWiki dataset and mention the sampling methodology:
-
-```bibtex
-@dataset{finewiki_sampled_10000000,
-  title={FineWiki Sampled Dataset (10,000,000 tokens)},
-  author={CodeLion},
-  year={2025},
-  howpublished={\url{codelion/finewiki-10M}},
-  note={Sampled from HuggingFaceFW/finewiki using reservoir sampling}
-}
-```
+If you use this dataset, please cite both the original FineWiki dataset and mention the sampling methodology.
 
 ## License
 
@@ -103,8 +124,8 @@ Apache 2.0 (same as original FineWiki dataset)
 
 ## Dataset Card Authors
 
-codelion
+CodeLion
 
 ## Dataset Card Contact
 
-For questions or issues, please open an issue on the dataset repository.
+For questions or issues, please open an issue on the dataset repository.
````
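
The "standard reservoir sampling" named in the updated card's algorithm steps can be sketched as classic Algorithm R. This is an illustrative sketch only, not the author's actual sampling script: the function name and the fixed reservoir size `k` are assumptions here (the card's pipeline sizes the sample by a 10,000,000-token budget counted with the GPT-2 tokenizer rather than by a fixed item count).

```python
import random

def reservoir_sample(stream, k, seed=42):
    """Algorithm R: keep a uniform random sample of k items from a stream.

    Every item seen ends up in the reservoir with equal probability k/n,
    where n is the number of items streamed so far.
    """
    rng = random.Random(seed)
    reservoir = []
    for n, doc in enumerate(stream, start=1):
        if len(reservoir) < k:
            reservoir.append(doc)        # fill phase: keep the first k items
        else:
            j = rng.randrange(n)         # uniform index in [0, n)
            if j < k:
                reservoir[j] = doc       # replace with probability k/n
    return reservoir
```

Fixing the seed (the card records **Random Seed**: 42) makes the sample reproducible: re-running over the same stream yields the same documents.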