---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: dump
    dtype: string
  - name: url
    dtype: string
  - name: file_path
    dtype: string
  - name: language
    dtype: string
  - name: language_score
    dtype: float64
  - name: token_count
    dtype: int64
  - name: score
    dtype: float64
  - name: int_score
    dtype: int64
  - name: raw_text
    dtype: string
  - name: document_id
    dtype: string
  - name: overlap_score
    dtype: float64
  splits:
  - name: train
    num_bytes: 1049455
    num_examples: 100
  download_size: 625799
  dataset_size: 1049455
---

# FineWeb-Edu GPT-2 Tokenized Dataset

**Repository:** `LaughTaleAI/fineweb-edu-gpt2-tokenized`

This dataset contains a **tokenized version of the FineWeb-Edu dataset** using the **GPT-2 tokenizer** (via `tiktoken`). The dataset is optimized for **training GPT-style causal language models** and stored as **binary token shards** for maximum training throughput.

---

# Overview

This dataset converts the original **FineWeb-Edu text corpus** into a **continuous stream of GPT-2 tokens** and stores them in binary shards.

The format is designed for:

- fast training
- minimal preprocessing overhead
- efficient dataloading
- compatibility with GPT-style architectures

Each file contains a **contiguous token stream** that can be randomly sampled during training.

---

# Dataset Format

Each file is a **binary `.bin` file** containing tokens encoded as:

```
dtype = uint16
```

Each token corresponds to a **GPT-2 vocabulary token id**.

Example layout of a shard:

```
train_00000.bin
train_00001.bin
train_00002.bin
...
```

Each shard contains approximately:

```
100M tokens per file
```

(Actual size may vary slightly depending on the final shard.)

Binary size per shard (100M tokens × 2 bytes):

```
~200MB per file
```

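The layout above, a headerless stream of unsigned 16-bit integers, can be exercised with nothing but the Python standard library. The file name and token ids below are illustrative, not real shard contents, and byte order is the machine's native order:

```python
import array

# A hypothetical mini-shard: three GPT-2 token ids stored back-to-back
# as native-order uint16, with no header and no padding.
tokens = array.array("H", [15496, 995, 50256])  # "H" = unsigned 16-bit

with open("mini_shard.bin", "wb") as f:
    tokens.tofile(f)

# Reading back: the token count is simply the file size divided by 2 bytes.
back = array.array("H")
with open("mini_shard.bin", "rb") as f:
    back.fromfile(f, 3)

print(list(back))  # → [15496, 995, 50256]
```

Because there is no header, any tool that can read raw uint16 arrays (e.g. `numpy.fromfile` or `numpy.memmap`) can consume the shards directly.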
---

# Tokenization Details

Tokenization was performed using:

```
Tokenizer: GPT-2 BPE
Library: tiktoken
Vocabulary size: 50,257
```

Special tokens:

```
<|endoftext|> (50256)
```

An **EOS token is appended after every document** to preserve document boundaries.

Example token sequence:

```
[doc1 tokens] <EOS> [doc2 tokens] <EOS> [doc3 tokens]
```

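Because every document ends with `<|endoftext|>` (id 50256), document boundaries can be recovered from the flat stream at any time. A minimal sketch, using made-up token ids rather than real GPT-2 output:

```python
EOS = 50256  # GPT-2 <|endoftext|>

# A flat stream as produced by the pipeline: [doc1] EOS [doc2] EOS [doc3] EOS
stream = [101, 102, 103, EOS, 201, 202, EOS, 301, EOS]

def split_documents(tokens):
    """Split an EOS-delimited token stream back into per-document lists."""
    docs, current = [], []
    for t in tokens:
        if t == EOS:
            docs.append(current)
            current = []
        else:
            current.append(t)
    if current:  # trailing tokens in case the final EOS is missing
        docs.append(current)
    return docs

print(split_documents(stream))  # → [[101, 102, 103], [201, 202], [301]]
```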
---

# Preprocessing Pipeline

The preprocessing pipeline performs:

1. Load FineWeb-Edu parquet shards
2. Tokenize text using the GPT-2 tokenizer
3. Append an EOS token after each document
4. Concatenate tokens into a continuous stream
5. Write tokens into binary shards

The resulting dataset is **fully deterministic and reproducible**.

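Steps 2 through 5 can be sketched as below. This is a simplified illustration, not the actual pipeline code: `encode` is a stand-in for the real `tiktoken` GPT-2 encoder, the documents are invented, and the shard size is shrunk from ~100M tokens so the example is self-contained:

```python
import array

EOS = 50256
SHARD_TOKENS = 8  # real shards hold ~100M tokens; shrunk for illustration

def encode(text):
    # Stand-in for tiktoken's GPT-2 encoder: one fake id per whitespace word.
    return [hash(w) % 50000 for w in text.split()]

documents = ["hello world", "a second document here", "third"]

# Steps 2-4: tokenize, append EOS after each document, concatenate.
stream = []
for doc in documents:
    stream.extend(encode(doc))
    stream.append(EOS)

# Step 5: write fixed-size uint16 shards.
shards = []
for i in range(0, len(stream), SHARD_TOKENS):
    name = f"train_{i // SHARD_TOKENS:05d}.bin"
    with open(name, "wb") as f:
        array.array("H", stream[i:i + SHARD_TOKENS]).tofile(f)
    shards.append(name)

print(shards)  # → ['train_00000.bin', 'train_00001.bin']
```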
---

# Training Usage

This dataset is designed for **GPT-style causal language modeling**.

Typical training workflow:

```
1. Load .bin shard using numpy.memmap
2. Randomly sample token offsets
3. Extract fixed-length sequences
4. Train autoregressive model
```

Example:

```python
import numpy as np

# Memory-map the shard so tokens are paged in on demand.
data = np.memmap("train_00000.bin", dtype=np.uint16, mode="r")

seq_len = 512
# The target y is shifted by one, so a start offset needs seq_len + 1 tokens.
start = np.random.randint(0, len(data) - seq_len)

x = data[start:start + seq_len]
y = data[start + 1:start + seq_len + 1]
```

This avoids padding and enables extremely fast dataloading.

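The single-sequence example extends naturally to a whole training batch by sampling several offsets at once. The shapes and the synthetic stream below are illustrative:

```python
import numpy as np

def get_batch(data, batch_size, seq_len, rng):
    """Sample a batch of (input, target) sequences from a flat token stream."""
    starts = rng.integers(0, len(data) - seq_len, size=batch_size)
    # Gather each sequence and its one-token-shifted target.
    x = np.stack([data[s:s + seq_len] for s in starts]).astype(np.int64)
    y = np.stack([data[s + 1:s + seq_len + 1] for s in starts]).astype(np.int64)
    return x, y

# Demo on a synthetic stream standing in for a memory-mapped shard.
data = np.arange(10_000, dtype=np.uint16)
x, y = get_batch(data, batch_size=4, seq_len=8, rng=np.random.default_rng(0))
print(x.shape, y.shape)  # → (4, 8) (4, 8)
assert (y == x + 1).all()  # targets are inputs shifted by one token
```

Casting to `int64` up front avoids uint16 overflow once the ids feed an embedding layer or loss function.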
---

# Advantages of Binary Token Datasets

Compared to text datasets:

| Feature             | Text Dataset | Token Dataset  |
| ------------------- | ------------ | -------------- |
| Tokenization cost   | high         | none           |
| Training throughput | medium       | very high      |
| Disk size           | larger       | smaller        |
| Loading speed       | slower       | extremely fast |

Binary token datasets are widely used in large-scale LLM training pipelines.

---

# Dataset Source

Original dataset:

```
karpathy/fineweb-edu-100b-shuffle
```

Source repository:

[https://huggingface.co/datasets/karpathy/fineweb-edu-100b-shuffle](https://huggingface.co/datasets/karpathy/fineweb-edu-100b-shuffle)

The dataset contains **educational web text filtered for high-quality content**.

---

# Intended Use

This dataset is suitable for:

* GPT-style language model pretraining
* research experiments
* tokenizer experiments
* training small-to-medium-sized LLMs

---

# Example Training Setup

Typical configuration used with this dataset:

```
sequence length: 512
batch size: 256
optimizer: AdamW
learning rate: 3e-4
```

The dataset can support **millions of training sequences** through random sampling.

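A quick sanity check on these numbers, using the ~100M-token shard size described earlier (pure arithmetic):

```python
# Tokens consumed per optimizer step with the configuration above.
seq_len, batch_size = 512, 256
tokens_per_step = seq_len * batch_size
print(tokens_per_step)  # → 131072

# A ~100M-token shard offers roughly this many distinct start offsets,
# i.e. distinct (overlapping) training sequences.
shard_tokens = 100_000_000
distinct_sequences = shard_tokens - seq_len
print(distinct_sequences)  # → 99999488

# Optimizer steps needed to consume one shard's tokens once, non-overlapping.
print(shard_tokens // tokens_per_step)  # → 762
```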
# License

This dataset inherits the license of the original **FineWeb-Edu dataset**.

Please refer to the original dataset repository for licensing details.

---

# Citation

If you use this dataset, please cite the original FineWeb dataset.

```
@dataset{fineweb,
  title     = {FineWeb Dataset},
  year      = {2024},
  publisher = {HuggingFace}
}
```

---

# Acknowledgements

Thanks to the creators of:

* FineWeb dataset
* Hugging Face Datasets
* tiktoken tokenizer
| ```` |
| |
| --- |
| |
| # ⭐ Optional (Recommended) |
| |
| You may also add a small metadata file: |
| |
| `meta.json` |
| |
| ```json |
| { |
| "tokenizer": "gpt2", |
| "vocab_size": 50257, |
| "dtype": "uint16", |
| "tokens_per_shard": 100000000, |
| "format": "binary_token_stream" |
| } |
| ```` |