---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: dump
    dtype: string
  - name: url
    dtype: string
  - name: file_path
    dtype: string
  - name: language
    dtype: string
  - name: language_score
    dtype: float64
  - name: token_count
    dtype: int64
  - name: score
    dtype: float64
  - name: int_score
    dtype: int64
  - name: raw_text
    dtype: string
  - name: document_id
    dtype: string
  - name: overlap_score
    dtype: float64
  splits:
  - name: train
    num_bytes: 1049455
    num_examples: 100
  download_size: 625799
  dataset_size: 1049455
---

# FineWeb-Edu GPT-2 Tokenized Dataset

**Repository:** `LaughTaleAI/fineweb-edu-gpt2-tokenized`

This dataset contains a **tokenized version of the FineWeb-Edu dataset** using the **GPT-2 tokenizer** (`tiktoken`).  
The dataset is optimized for **training GPT-style causal language models** and stored as **binary token shards** for maximum training throughput.

---

# Overview

This dataset converts the original **FineWeb-Edu text corpus** into a **continuous stream of GPT-2 tokens** and stores them in binary shards.  

The format is designed for:

- fast training
- minimal preprocessing overhead
- efficient dataloading
- compatibility with GPT-style architectures

Each file contains a **contiguous token stream** that can be randomly sampled during training.

---

# Dataset Format

Each file is a **binary `.bin` file** containing tokens encoded as:

```
dtype = uint16
```

Each token corresponds to a **GPT-2 vocabulary token id**.

Example layout of a shard:

```
train_00000.bin
train_00001.bin
train_00002.bin
...
```

Each shard contains approximately **100M tokens** (~200 MB, since each `uint16` token occupies 2 bytes). The final shard may be slightly smaller.
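Because every token is stored as exactly two bytes, a shard's token count follows directly from its file size. A minimal sketch (the helper name and demo file are illustrative, not part of the dataset):

```python
import os
import tempfile

import numpy as np

def token_count(path):
    # uint16 tokens occupy 2 bytes each, so count = file size / 2
    return os.path.getsize(path) // 2

# Demo with a tiny synthetic shard (a real shard is ~200 MB / ~100M tokens)
path = os.path.join(tempfile.gettempdir(), "demo_shard.bin")
np.zeros(1000, dtype=np.uint16).tofile(path)
print(token_count(path))  # 1000
```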

---

# Tokenization Details

Tokenization was performed using:

```
Tokenizer: GPT-2 BPE
Library: tiktoken
Vocabulary size: 50,257
```

Special tokens:

```
<|endoftext|> (50256)
```

An **EOS token is appended after every document** to preserve document boundaries.

Example token sequence:

```
[doc1 tokens] <EOS> [doc2 tokens] <EOS> [doc3 tokens]
```

---

# Preprocessing Pipeline

The preprocessing pipeline performs:

1. Load FineWeb-Edu parquet shards
2. Tokenize text using GPT-2 tokenizer
3. Append EOS token after each document
4. Concatenate tokens into a continuous stream
5. Write tokens into binary shards

The resulting dataset is **fully deterministic and reproducible**.
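The final steps (concatenate and write) reduce to a single `numpy` call; a sketch with hypothetical token values and file name:

```python
import os
import tempfile

import numpy as np

def write_shard(tokens, path):
    # GPT-2 ids fit in uint16 because the vocab size (50,257) <= 65,536
    np.asarray(tokens, dtype=np.uint16).tofile(path)

path = os.path.join(tempfile.gettempdir(), "train_demo.bin")
write_shard([17, 42, 7, 50256], path)

# Round-trip check via memmap
data = np.memmap(path, dtype=np.uint16, mode="r")
print(data.tolist())  # [17, 42, 7, 50256]
```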

---

# Training Usage

This dataset is designed for **GPT-style causal language modeling**.

Typical training workflow:

1. Load a `.bin` shard with `numpy.memmap`
2. Randomly sample token offsets
3. Extract fixed-length sequences
4. Train the autoregressive model

Example:

```python
import numpy as np

# Memory-map the shard; nothing is read into RAM until sliced
data = np.memmap("train_00000.bin", dtype=np.uint16, mode="r")

seq_len = 512
start = np.random.randint(0, len(data) - seq_len - 1)

x = data[start : start + seq_len]           # input tokens
y = data[start + 1 : start + seq_len + 1]   # targets, shifted by one
```

This avoids padding and enables extremely fast dataloading.
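The single-sample snippet generalizes to a batched sampler; a minimal sketch (the function name and synthetic data stand in for a real memmapped shard):

```python
import numpy as np

def get_batch(data, batch_size, seq_len, rng):
    # Random start offsets; each yields an (input, target) pair shifted by one
    starts = rng.integers(0, len(data) - seq_len - 1, size=batch_size)
    x = np.stack([data[s : s + seq_len] for s in starts]).astype(np.int64)
    y = np.stack([data[s + 1 : s + seq_len + 1] for s in starts]).astype(np.int64)
    return x, y

# Synthetic stand-in for a memmapped shard
data = np.arange(50_000, dtype=np.uint16)
x, y = get_batch(data, batch_size=4, seq_len=512, rng=np.random.default_rng(0))
```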

---

# Advantages of Binary Token Datasets

Compared to text datasets:

| Feature             | Text Dataset | Token Dataset  |
| ------------------- | ------------ | -------------- |
| Tokenization cost   | high         | none           |
| Training throughput | medium       | very high      |
| Disk size           | larger       | smaller        |
| Loading speed       | slower       | extremely fast |

Binary token datasets are widely used in large-scale LLM training pipelines.

---

# Dataset Source

Original dataset:

```
karpathy/fineweb-edu-100b-shuffle
```

Source repository:

[https://huggingface.co/datasets/karpathy/fineweb-edu-100b-shuffle](https://huggingface.co/datasets/karpathy/fineweb-edu-100b-shuffle)

The dataset contains **educational web text filtered for high-quality content**.

---

# Intended Use

This dataset is suitable for:

* GPT-style language model pretraining
* research experiments
* tokenizer experiments
* training small- to medium-sized LLMs

---

# Example Training Setup

Typical configuration used with this dataset:

```
sequence length: 512
batch size: 256
optimizer: AdamW
learning rate: 3e-4
```

The dataset can support **millions of training sequences** through random sampling.
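The back-of-envelope count behind that claim, using the nominal shard size above:

```python
tokens_per_shard = 100_000_000  # nominal ~100M tokens per .bin file
seq_len = 512

# Random sampling: every valid start offset is a distinct sequence
distinct_starts = tokens_per_shard - seq_len - 1

# Even non-overlapping chunking yields ~195K sequences per shard
non_overlapping = tokens_per_shard // seq_len

print(distinct_starts, non_overlapping)  # 99999487 195312
```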

---

# License

This dataset inherits the license of the original **FineWeb-Edu dataset**.

Please refer to the original dataset repository for licensing details.

---

# Citation

If you use this dataset, please cite the original FineWeb dataset.

```
@dataset{fineweb,
  title = {FineWeb Dataset},
  year = {2024},
  publisher = {HuggingFace}
}
```

---

# Acknowledgements

Thanks to the creators of:

* FineWeb dataset
* Hugging Face Datasets
* tiktoken tokenizer


---

# Metadata File

A small `meta.json` file can accompany the shards to describe the token format:

```json
{
  "tokenizer": "gpt2",
  "vocab_size": 50257,
  "dtype": "uint16",
  "tokens_per_shard": 100000000,
  "format": "binary_token_stream"
}
```