---
license: mit
---
# Stage 1 Packed Pretraining Dataset
This dataset contains preprocessed, token-packed `.bin` files intended for pretraining a decoder-only Transformer language model.
## Dataset Contents
- Each `.bin` file contains a fixed number of samples, where each sample is exactly 8192 tokens long.
- Samples are grouped into batches of 125 samples, totaling 1,024,000 tokens per batch.
- Each file (called a "block") contains 62,500 samples (approximately 512 million tokens).
- All samples are tokenized using `GPT2TokenizerFast` from Hugging Face Transformers.
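The numbers above are internally consistent and can be checked with a few lines of arithmetic (a quick sanity check, not part of the dataset tooling):

```python
SAMPLE_LEN = 8192            # tokens per sample
BATCH_SIZE = 125             # samples per batch
SAMPLES_PER_BLOCK = 62_500   # samples per .bin file

tokens_per_batch = SAMPLE_LEN * BATCH_SIZE           # 1,024,000
tokens_per_block = SAMPLE_LEN * SAMPLES_PER_BLOCK    # 512,000,000
batches_per_block = SAMPLES_PER_BLOCK // BATCH_SIZE  # 500 batches per block
bytes_per_block = tokens_per_block * 4               # int32 -> ~2.05 GB per file

print(tokens_per_batch, tokens_per_block, batches_per_block, bytes_per_block)
```

Each block is therefore roughly 2 GB on disk and holds 500 batches.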
## Structure
- Format: binary files of `int32` token IDs.
- File naming: `stage1_block_0000.bin`, `stage1_block_0001.bin`, etc.
- Tokenizer: `GPT2TokenizerFast`, with `eos_token` used as both separator and padding token.
- Context length: 8192 tokens per sample.
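Because every sample is a fixed 8192 `int32` tokens, a block can be memory-mapped and viewed as a 2-D array without loading the whole ~2 GB file. A sketch using NumPy (here a tiny synthetic "block" of 4 samples stands in for a real file, which holds 62,500):

```python
import numpy as np

SAMPLE_LEN = 8192

# For illustration, write a tiny synthetic block of 4 samples
# (token IDs are random here; a real block holds 62,500 samples).
rng = np.random.default_rng(0)
fake = rng.integers(0, 50257, size=4 * SAMPLE_LEN, dtype=np.int32)
fake.tofile("tiny_block.bin")

# Memory-map the file and view it as (num_samples, 8192);
# the row count is inferred from the file size.
tokens = np.memmap("tiny_block.bin", dtype=np.int32, mode="r")
samples = tokens.reshape(-1, SAMPLE_LEN)

print(samples.shape)  # (4, 8192)
```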
## Source Datasets
Tokens were drawn from a diverse mix of high-quality open datasets:
- C4 (en)
- Wikipedia (2023/11 dump)
- OpenWebText
- CCNews
- Gutenberg
- arXiv
- BookCorpusOpen
- S2ORC
- TriviaQA
- PAQ
- Natural Questions
Each dataset was assigned a token quota to ensure a balanced mix.
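The card does not state the exact quotas. As a minimal sketch of quota-based mixing, with invented source names and quota values, the idea is to draw tokenized documents from each source until its token budget is spent:

```python
def mix_by_quota(sources, quotas):
    """Yield tokenized documents from each source until its token quota is met.

    `sources` maps a name to an iterator of token lists; `quotas` maps the
    same names to a target token count. All names and values are illustrative.
    """
    for name, docs in sources.items():
        taken = 0
        for doc_tokens in docs:
            if taken >= quotas[name]:
                break
            yield doc_tokens
            taken += len(doc_tokens)

# Toy illustration with made-up sources and quotas.
sources = {"c4": iter([[1] * 6, [2] * 6]), "wiki": iter([[3] * 6, [4] * 6])}
quotas = {"c4": 6, "wiki": 12}
mixed = list(mix_by_quota(sources, quotas))
print(len(mixed))  # 3: one document from "c4", two from "wiki"
```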
## Preprocessing & Packing Strategy
- Samples were streamed using Hugging Face Datasets with shuffling.
- Texts were tokenized, filtered using a garbage filter, and concatenated with separator tokens.
- Samples were packed into fixed-length chunks of 8192 tokens.
- Leftover tokens from one batch are carried forward to the next to ensure no token duplication or loss.
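The packing step described above (concatenate with separator tokens, cut into fixed 8192-token samples, carry the remainder forward) can be sketched as:

```python
EOS = 50256        # GPT-2 eos_token_id, used as the separator
SAMPLE_LEN = 8192

def pack(docs, sample_len=SAMPLE_LEN, eos=EOS):
    """Concatenate token lists with an EOS separator and emit fixed-length
    samples; leftover tokens carry forward into the next sample, so no
    tokens are duplicated or dropped."""
    buffer = []
    for tokens in docs:
        buffer.extend(tokens)
        buffer.append(eos)
        while len(buffer) >= sample_len:
            yield buffer[:sample_len]
            buffer = buffer[sample_len:]  # carry the remainder forward
    # A final partial buffer would be padded with eos (not shown here).

# Toy check with a short sample length instead of 8192.
docs = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
packed = list(pack(docs, sample_len=4))
print(packed)  # [[1, 2, 3, 50256], [4, 5, 50256, 6], [7, 8, 9, 50256]]
```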
### Garbage Filtering Heuristics

Texts were removed if they had:
- Too few words or characters.
- High symbol-to-alphanumeric ratio.
- Excessive character repetition.
- Very low word diversity.
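The exact thresholds used are not given in this card; a sketch implementing the four heuristics with illustrative values:

```python
import re

def looks_like_garbage(text, min_words=20, min_chars=100,
                       max_symbol_ratio=0.3, max_char_run=10,
                       min_word_diversity=0.2):
    """Return True if `text` trips any heuristic (all thresholds illustrative)."""
    words = text.split()
    # Too few words or characters.
    if len(words) < min_words or len(text) < min_chars:
        return True
    # High symbol-to-alphanumeric ratio.
    alnum = sum(c.isalnum() for c in text)
    symbols = sum(not (c.isalnum() or c.isspace()) for c in text)
    if alnum == 0 or symbols / alnum > max_symbol_ratio:
        return True
    # Excessive character repetition (runs longer than max_char_run).
    if re.search(r"(.)\1{%d,}" % max_char_run, text):
        return True
    # Very low word diversity (unique words / total words).
    if len(set(words)) / len(words) < min_word_diversity:
        return True
    return False

print(looks_like_garbage("spam " * 50))    # True: one word repeated
print(looks_like_garbage("#$ %! " * 40))   # True: no alphanumeric characters
```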
## Usage Example
You can load and decode tokens using PyTorch:
```python
import torch
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Read the raw int32 token IDs; wrapping in bytearray makes the buffer
# writable, which avoids a torch.frombuffer warning.
with open("stage1_block_0000.bin", "rb") as f:
    tokens = torch.frombuffer(bytearray(f.read()), dtype=torch.int32)

# Decode the first sample (samples are exactly 8192 tokens each).
sample = tokens[:8192].tolist()
text = tokenizer.decode(sample)
print(text)
```
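Since a full block is roughly 2 GB, reading the whole file just to inspect one sample is wasteful. Because every sample occupies exactly 8192 × 4 bytes, you can seek directly to the sample you want (a sketch; `read_sample` is a helper introduced here, not part of the dataset tooling):

```python
import numpy as np

SAMPLE_LEN = 8192
BYTES_PER_SAMPLE = SAMPLE_LEN * 4  # int32

def read_sample(path, index):
    """Read sample `index` from a packed .bin block via a direct seek,
    without loading the rest of the file."""
    with open(path, "rb") as f:
        f.seek(index * BYTES_PER_SAMPLE)
        raw = f.read(BYTES_PER_SAMPLE)
    return np.frombuffer(raw, dtype=np.int32)

# e.g. tokens = read_sample("stage1_block_0000.bin", 42)
```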