License:
---
license: odc-by
task_categories:
- text-generation
language:
- en
tags:
- fineweb
- fineweb-edu
- pretraining
size_categories:
- 1B<n<10B
---
# FineWeb-Sample-5.97B-512

## Dataset Description

This dataset contains approximately **5.97 billion tokens** (5,968,954,880 tokens) sampled from the [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) dataset. Each text sample is capped at a maximum of **512 tokens**.

### Dataset Statistics

- **Total Tokens**: ~5.97B (5,968,954,880)
- **Max Tokens per Sample**: 512
- **Max Characters per Sample**: 5,120 (10 chars/token estimate)
- **Source Dataset**: FineWeb-Edu 350BT
- **Random Seed**: 42

### Dataset Structure

The dataset is stored in chunked Parquet files with the following columns:

- `text`: The text content (string, max 5,120 characters)
- `token_count`: Number of tokens in the text (integer, max 512)
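The per-row limits above (512 tokens, 5,120 characters) can be checked with a small validator. This is a minimal sketch in plain Python; the function name `validate_row` is ours, and the 10 chars/token relation is the estimate stated in the statistics:

```python
MAX_TOKENS = 512               # per-sample token cap from this card
MAX_CHARS = MAX_TOKENS * 10    # 5,120 characters (10 chars/token estimate)

def validate_row(row: dict) -> bool:
    """Check one dataset row against the documented schema limits."""
    text = row["text"]
    token_count = row["token_count"]
    return (
        isinstance(text, str)
        and isinstance(token_count, int)
        and len(text) <= MAX_CHARS
        and 0 < token_count <= MAX_TOKENS
    )

print(validate_row({"text": "An educational sample.", "token_count": 5}))   # True
print(validate_row({"text": "x" * 6000, "token_count": 512}))               # False
```

A check like this can be run over each Parquet chunk after download to confirm the schema before training.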
### Intended Use

This dataset is designed for:

- Language model pretraining experiments
- Chinchilla-optimal scaling experiments
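As a rough illustration of the Chinchilla-optimal use case, the widely used ~20 tokens-per-parameter heuristic (our assumption, not part of this card) suggests the model size this token budget supports:

```python
TOTAL_TOKENS = 5_968_954_880   # token count from this card
TOKENS_PER_PARAM = 20          # Chinchilla-style heuristic (assumption)

# Approximate compute-optimal parameter count for this token budget
optimal_params = TOTAL_TOKENS / TOKENS_PER_PARAM
print(f"~{optimal_params / 1e6:.0f}M parameters")  # ~298M parameters
```

So at roughly 6B tokens, the dataset is sized for pretraining runs in the few-hundred-million-parameter range under this heuristic.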
### Source

Sampled from the [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) dataset, which is a filtered subset of FineWeb focusing on educational content.

### License

This dataset inherits the ODC-By license from FineWeb-Edu.