---
license: mit
---
# Stage 1 Packed Pretraining Dataset

This dataset contains preprocessed and token-packed `.bin` files intended for use in pretraining a decoder-only Transformer language model.

## Dataset Contents

- Each `.bin` file contains a fixed number of samples, where each sample is exactly 8192 tokens long.
- Samples are grouped into batches of 125 samples, totaling **1.024 million tokens per batch**.
- Each file (called a "block") contains 62,500 samples (500 batches), i.e. exactly 512 million tokens.
- All samples are tokenized using the `GPT2TokenizerFast` from Hugging Face Transformers.
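The figures above can be sanity-checked with a few lines of arithmetic (a sketch; the 4-byte token width follows from the `int32` format noted under Structure):

```python
TOKENS_PER_SAMPLE = 8192
SAMPLES_PER_BATCH = 125
SAMPLES_PER_BLOCK = 62_500
BYTES_PER_TOKEN = 4  # int32

tokens_per_batch = TOKENS_PER_SAMPLE * SAMPLES_PER_BATCH   # 1,024,000
tokens_per_block = TOKENS_PER_SAMPLE * SAMPLES_PER_BLOCK   # 512,000,000
bytes_per_block = tokens_per_block * BYTES_PER_TOKEN       # 2,048,000,000 (~1.9 GiB)

print(tokens_per_batch, tokens_per_block, bytes_per_block)
```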

## Structure

- Format: Binary files (`int32`) containing token IDs.
- File naming: `stage1_block_0000.bin`, `stage1_block_0001.bin`, etc.
- Tokenizer: `GPT2TokenizerFast` with `eos_token` used as a separator and padding token.
- Context length: 8192 tokens per sample.

## Source Datasets

Tokens were drawn from a diverse mix of high-quality open datasets:

- `C4 (en)`
- `Wikipedia (2023/11 dump)`
- `OpenWebText`
- `CCNews`
- `Gutenberg`
- `arXiv`
- `BookCorpus Open`
- `S2ORC`
- `TriviaQA`
- `PAQ`
- `Natural Questions`

Each dataset was assigned a token quota to ensure a balanced mix.
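A quota mechanism of this kind might look like the following sketch (illustrative only; `take_until_quota` and its exact behavior at the quota boundary are assumptions, not the actual preprocessing code):

```python
def take_until_quota(token_stream, quota):
    """Yield tokenized samples (lists of token IDs) from a stream
    until roughly `quota` tokens have been consumed."""
    taken = 0
    for tokens in token_stream:
        if taken >= quota:
            break
        yield tokens
        taken += len(tokens)
```

Running this per source dataset, with a per-source quota, yields a mix whose proportions are controlled by the quotas rather than by the raw dataset sizes.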

## Preprocessing & Packing Strategy

- Samples were **streamed** using Hugging Face Datasets with shuffling.
- Texts were **tokenized**, filtered using a garbage filter, and concatenated with separator tokens.
- Samples were packed into fixed-length chunks of 8192 tokens.
- Leftover tokens from one batch are carried forward to the next to ensure no token duplication or loss.
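The packing-with-carry-over step above can be sketched as follows (a minimal illustration, not the actual preprocessing code; `50256` is GPT-2's `eos_token_id`, used as the separator):

```python
SEP = 50256  # GPT-2 eos_token_id, used as the separator between texts

def pack_samples(tokenized_texts, context_len=8192):
    """Concatenate tokenized texts with separator tokens and emit
    fixed-length samples of `context_len` tokens each."""
    buffer = []
    for tokens in tokenized_texts:
        buffer.extend(tokens)
        buffer.append(SEP)
        while len(buffer) >= context_len:
            yield buffer[:context_len]
            # Carry leftover tokens forward: nothing is dropped or duplicated.
            buffer = buffer[context_len:]
```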

### Garbage Filtering Heuristics
- Removed texts with:
  - Too few words or characters.
  - High symbol-to-alphanumeric ratio.
  - Excessive character repetition.
  - Very low word diversity.
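A minimal sketch of such a filter (all thresholds here are illustrative placeholders, not the values used to build this dataset):

```python
def is_garbage(text,
               min_words=10,          # illustrative thresholds only
               min_chars=50,
               max_symbol_ratio=0.3,
               max_repeat=0.2,
               min_diversity=0.3):
    """Heuristic garbage filter mirroring the four checks above."""
    words = text.split()
    # Too few words or characters.
    if len(words) < min_words or len(text) < min_chars:
        return True
    # High symbol-to-alphanumeric ratio.
    alnum = sum(c.isalnum() for c in text)
    symbols = sum(not c.isalnum() and not c.isspace() for c in text)
    if alnum == 0 or symbols / alnum > max_symbol_ratio:
        return True
    # Excessive repetition of a single (non-whitespace) character.
    chars = [c for c in text if not c.isspace()]
    if max(chars.count(c) for c in set(chars)) / len(chars) > max_repeat:
        return True
    # Very low word diversity (unique words over total words).
    if len(set(words)) / len(words) < min_diversity:
        return True
    return False
```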

## Usage Example

You can load and decode tokens using PyTorch:

```python
import torch
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
with open("stage1_block_0000.bin", "rb") as f:
    raw = bytearray(f.read(8192 * 4))  # first sample: 8192 tokens x 4 bytes (int32)

# bytearray is writable, so torch.frombuffer does not warn about read-only memory.
tokens = torch.frombuffer(raw, dtype=torch.int32)
text = tokenizer.decode(tokens.tolist())
print(text)
```

To read sample `i` of a block instead, seek to byte offset `i * 8192 * 4` before reading.