Datasets: Update dataset card with correct schema

README.md CHANGED
@@ -10,17 +10,40 @@ dataset_info:
  features:
  - name: text
    dtype: string
  - name: url
    dtype: string
  splits:
  - name: train
-
-   num_examples: 67976
  ---

  # FineWiki Sampled Dataset (100,000,000 tokens)

- This is a sampled subset of [HuggingFaceFW/finewiki](https://huggingface.co/datasets/HuggingFaceFW/finewiki) containing approximately **100,000,000 tokens**.

  ## Dataset Details
@@ -28,14 +51,11 @@ This is a sampled subset of [HuggingFaceFW/finewiki](https://huggingface.co/data
  - **Original Dataset**: HuggingFaceFW/finewiki (English subset, train split)
  - **Sampling Method**: Reservoir sampling (unbiased random sampling)
  - **Target Token Count**: 100,000,000 tokens
- - **Actual Token Count**: 100,000,047 tokens
  - **Tokenizer**: GPT-2 (50,257 vocabulary)

  ### Sampling Statistics
- - **Documents Sampled**: 67,976
-
- - **Tokens Processed**: 100,000,047
- - **Sampling Rate**: 1.0000
  - **Random Seed**: 42

  ### Sampling Method
@@ -49,10 +69,8 @@ This dataset was created using **reservoir sampling**, which ensures:
  The sampling algorithm:
  1. Streams through HuggingFaceFW/finewiki without downloading
  2. Uses GPT-2 tokenizer to count tokens per document
- 3. Maintains a reservoir of documents
- 4. Replaces a random reservoir entry with probability k/n
-    - k = reservoir size, n = total documents seen
- 5. Guarantees uniform random sample across entire dataset
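The removed steps appeal to the standard reservoir-sampling invariant. As a quick sanity check (the textbook induction argument, not from the card), with reservoir size k, after n documents every document seen so far remains in the reservoir with equal probability:

```latex
% Induction step: document n is inserted with probability k/n into a random
% slot; a document already in the reservoir is evicted only if document n is
% inserted AND its slot is the one chosen, i.e. with probability (k/n)(1/k):
\Pr[i \in R_n] \;=\; \frac{k}{n-1}\Bigl(1 - \frac{k}{n}\cdot\frac{1}{k}\Bigr)
             \;=\; \frac{k}{n-1}\cdot\frac{n-1}{n} \;=\; \frac{k}{n}.
```

So each document is retained with probability exactly k/n, which is the uniformity claimed in the removed step 5.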

  ## Usage
@@ -65,13 +83,28 @@ dataset = load_dataset("codelion/finewiki-100M")
  # Access the training data
  for example in dataset['train']:
      print(example['text'])
  ```

  ## Dataset Structure

- Each example contains:
-
- -

  ## Use Cases
@@ -83,17 +116,7 @@ This sampled dataset is ideal for:

  ## Citation

- If you use this dataset, please cite both the original FineWiki dataset and mention the sampling methodology
-
- ```bibtex
- @dataset{finewiki_sampled_100000000,
-   title={FineWiki Sampled Dataset (100,000,000 tokens)},
-   author={CodeLion},
-   year={2025},
-   howpublished={\url{codelion/finewiki-100M}},
-   note={Sampled from HuggingFaceFW/finewiki using reservoir sampling}
- }
- ```

  ## License
@@ -10,17 +10,40 @@ dataset_info:
  features:
  - name: text
    dtype: string
+ - name: id
+   dtype: string
+ - name: wikiname
+   dtype: string
+ - name: page_id
+   dtype: int64
+ - name: title
+   dtype: string
  - name: url
    dtype: string
+ - name: date_modified
+   dtype: string
+ - name: in_language
+   dtype: string
+ - name: wikidata_id
+   dtype: string
+ - name: bytes_html
+   dtype: int64
+ - name: wikitext
+   dtype: string
+ - name: version
+   dtype: int64
+ - name: infoboxes
+   dtype: string
+ - name: has_math
+   dtype: bool
  splits:
  - name: train
+   num_examples: 53131
  ---

  # FineWiki Sampled Dataset (100,000,000 tokens)

+ This is a sampled subset of [HuggingFaceFW/finewiki](https://huggingface.co/datasets/HuggingFaceFW/finewiki) containing approximately **100,000,000 tokens**.

  ## Dataset Details
@@ -28,14 +51,11 @@ This is a sampled subset of [HuggingFaceFW/finewiki](https://huggingface.co/data
  - **Original Dataset**: HuggingFaceFW/finewiki (English subset, train split)
  - **Sampling Method**: Reservoir sampling (unbiased random sampling)
  - **Target Token Count**: 100,000,000 tokens
  - **Tokenizer**: GPT-2 (50,257 vocabulary)

  ### Sampling Statistics
+ - **Documents Sampled**: 53,131
+ - **Average Tokens/Doc**: 1882.2
  - **Random Seed**: 42

  ### Sampling Method
| 69 |
The sampling algorithm:
|
| 70 |
1. Streams through HuggingFaceFW/finewiki without downloading
|
| 71 |
2. Uses GPT-2 tokenizer to count tokens per document
|
| 72 |
+
3. Maintains a reservoir of documents using standard reservoir sampling
|
| 73 |
+
4. Stops when target token count is reached
|
|
|
|
|
|
|
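The four steps above can be sketched in a few lines. This is a minimal illustration, not the card's actual implementation: the function and parameter names are invented, and a real run would stream the source dataset and back `count_tokens` with the GPT-2 tokenizer rather than a toy counter.

```python
import random

def reservoir_sample(stream, k, token_budget, count_tokens, seed=42):
    """Algorithm R over a document stream, stopping once the sampled
    documents together reach token_budget tokens (steps 3-4 above)."""
    rng = random.Random(seed)  # the card reports random seed 42
    reservoir = []
    for n, doc in enumerate(stream, start=1):
        if len(reservoir) < k:
            reservoir.append(doc)
        else:
            j = rng.randrange(n)  # replace a random slot with probability k/n
            if j < k:
                reservoir[j] = doc
        # Recomputed each round for clarity; a real pipeline would keep
        # a running total instead of re-counting the reservoir.
        if sum(count_tokens(d) for d in reservoir) >= token_budget:
            break  # step 4: token budget reached
    return reservoir
```

With `k` sized to the expected document count and a GPT-2 `count_tokens`, this mirrors the sampling the card describes: an unbiased reservoir that stops at the token target.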

  ## Usage
@@ -65,13 +83,28 @@ dataset = load_dataset("codelion/finewiki-100M")
  # Access the training data
  for example in dataset['train']:
      print(example['text'])
+     print(example['title'])
+     print(example['url'])
  ```

  ## Dataset Structure

+ Each example contains all fields from the original FineWiki dataset:
+
+ - **text** (string): The Wikipedia article text (primary content)
+ - **id** (string): Unique identifier
+ - **wikiname** (string): Wikipedia source name
+ - **page_id** (int64): Wikipedia page ID
+ - **title** (string): Article title
+ - **url** (string): Source Wikipedia URL
+ - **date_modified** (string): Last modification date
+ - **in_language** (string): Language code (always 'en' for this subset)
+ - **wikidata_id** (string): Wikidata identifier
+ - **bytes_html** (int64): Size of HTML content
+ - **wikitext** (string): Original wikitext markup
+ - **version** (int64): Article version number
+ - **infoboxes** (string): Extracted infobox data
+ - **has_math** (bool): Whether article contains mathematical formulas
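To make the schema concrete, here is a toy illustration of records carrying a few of these fields (every value below is invented, not taken from the dataset) and a simple selection on `has_math`:

```python
# Toy records mimicking the schema above; all values are invented.
examples = [
    {"title": "Pythagorean theorem", "in_language": "en", "has_math": True},
    {"title": "History of tea",      "in_language": "en", "has_math": False},
    {"title": "Fourier transform",   "in_language": "en", "has_math": True},
]

# Keep only articles that contain mathematical formulas
math_titles = [ex["title"] for ex in examples if ex["has_math"]]
print(math_titles)  # ['Pythagorean theorem', 'Fourier transform']
```

On the real dataset the same selection can be expressed with the `datasets` library, e.g. `dataset['train'].filter(lambda ex: ex['has_math'])`.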

  ## Use Cases

@@ -83,17 +116,7 @@ This sampled dataset is ideal for:

  ## Citation

+ If you use this dataset, please cite both the original FineWiki dataset and mention the sampling methodology.

  ## License