---
language:
- en
license: apache-2.0
tags:
- wikipedia
- finewiki
- sampled
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: wikiname
    dtype: string
  - name: page_id
    dtype: int64
  - name: title
    dtype: string
  - name: url
    dtype: string
  - name: date_modified
    dtype: string
  - name: in_language
    dtype: string
  - name: wikidata_id
    dtype: string
  - name: bytes_html
    dtype: int64
  - name: wikitext
    dtype: string
  - name: version
    dtype: int64
  - name: infoboxes
    dtype: string
  - name: has_math
    dtype: bool
  splits:
  - name: train
    num_examples: 7088
---
|
|
|
|
|
# FineWiki Sampled Dataset (10,000,000 tokens)

This is a sampled subset of [HuggingFaceFW/finewiki](https://huggingface.co/datasets/HuggingFaceFW/finewiki) containing approximately **10,000,000 tokens**.
|
|
|
|
|
## Dataset Details

### Source

- **Original Dataset**: HuggingFaceFW/finewiki (English subset, train split)
- **Sampling Method**: Reservoir sampling (unbiased random sampling)
- **Target Token Count**: 10,000,000 tokens
- **Tokenizer**: GPT-2 (50,257-token vocabulary; see the token-counting sketch below)
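
Token counts throughout this card are measured with the GPT-2 tokenizer. A minimal sketch of how a per-document count can be computed (illustrative only; loading via `transformers` is an assumption, not the exact script used to build this dataset):

```python
# Illustrative sketch: count GPT-2 tokens for a single document.
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")  # 50,257-token vocabulary

def count_tokens(text: str) -> int:
    return len(tokenizer.encode(text))

print(count_tokens("Reservoir sampling keeps a uniform random subset of a stream."))
```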
|
|
|
|
|
### Sampling Statistics

- **Documents Sampled**: 7,088
- **Average Tokens/Doc**: ~1,411 (10,000,000 tokens / 7,088 documents)
- **Random Seed**: 42
|
|
|
|
|
### Sampling Method

This dataset was created using **reservoir sampling**, which ensures:

- ✅ An unbiased random sample from the full dataset
- ✅ Every document has an equal probability of being selected
- ✅ No positional bias (early and late documents are equally represented)
- ✅ Streaming-based (no need to download the full dataset)

The sampling procedure:

1. Streams through HuggingFaceFW/finewiki without downloading it in full
2. Uses the GPT-2 tokenizer to count tokens per document
3. Maintains a reservoir of documents using standard reservoir sampling (sketched below)
4. Stops once the target token count is reached
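
As a rough illustration, here is a minimal sketch of standard reservoir sampling (Algorithm R). It is not the exact script used to build this dataset; in particular, the reservoir size `k` is a hypothetical stand-in for however the 10M-token budget was enforced.

```python
import random
from typing import Dict, Iterable, List

def reservoir_sample(stream: Iterable[Dict], k: int, seed: int = 42) -> List[Dict]:
    """Algorithm R: keep a uniform random sample of k items from a stream of unknown length."""
    rng = random.Random(seed)
    reservoir: List[Dict] = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items
            reservoir.append(item)
        else:
            # Keep the i-th item with probability k / (i + 1)
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir
```

In practice the stream would be the source dataset loaded with `streaming=True`, and each document would be tokenized (as in the sketch above) so the run can stop once the sampled documents add up to roughly 10M tokens.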
|
|
|
|
|
## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("codelion/finewiki-10M")

# Access the training data
for example in dataset['train']:
    print(example['text'])
    print(example['title'])
    print(example['url'])
```
|
|
|
|
|
## Dataset Structure

Each example contains all fields from the original FineWiki dataset:

- **text** (string): The Wikipedia article text (primary content)
- **id** (string): Unique identifier
- **wikiname** (string): Wikipedia source name
- **page_id** (int64): Wikipedia page ID
- **title** (string): Article title
- **url** (string): Source Wikipedia URL
- **date_modified** (string): Last modification date
- **in_language** (string): Language code (always 'en' for this subset)
- **wikidata_id** (string): Wikidata identifier
- **bytes_html** (int64): Size of the HTML content in bytes
- **wikitext** (string): Original wikitext markup
- **version** (int64): Article version number
- **infoboxes** (string): Extracted infobox data
- **has_math** (bool): Whether the article contains mathematical formulas
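
For example, the `has_math` flag and the link-back fields can be used directly with the `datasets` API (illustrative sketch):

```python
from datasets import load_dataset

ds = load_dataset("codelion/finewiki-10M", split="train")

# Keep only articles flagged as containing mathematical formulas
math_articles = ds.filter(lambda ex: ex["has_math"])
print(f"{len(math_articles)} of {len(ds)} articles contain math")

# Each row links back to its source page and Wikidata entry
first = ds[0]
print(first["title"], first["url"], first["wikidata_id"])
```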
|
|
|
|
|
## Use Cases

This sampled dataset is ideal for:

- 🔬 Small-scale language model pretraining experiments
- 📊 Dataset composition studies
- ⚡ Quick prototyping and testing
- 💰 Low-cost training runs
|
|
|
|
|
## Citation

If you use this dataset, please cite:

```bibtex
@misc{sharma2025billion,
  title={The 1 Billion Token Challenge: Finding the Perfect Pre-training Mix},
  author={Sharma, Asankhaya},
  year={2025},
  url={https://huggingface.co/blog/codelion/optimal-dataset-mixing/}
}
```
|
|
|
|
|
For more details, see the [blog post](https://huggingface.co/blog/codelion/optimal-dataset-mixing/).
|
|
|
|
|
## License

Apache 2.0 (same as the original FineWiki dataset)
|
|
|
|
|
## Dataset Card Authors

CodeLion
|
|
|
|
|
## Dataset Card Contact

For questions or problems, please open an issue on the dataset repository.
|
|
|