---
license: mit
---

# Stage 1 Packed Pretraining Dataset

This dataset contains preprocessed, token-packed `.bin` files intended for pretraining a decoder-only Transformer language model.

## Dataset Contents

- Each `.bin` file contains a fixed number of samples, and each sample is exactly 8192 tokens long.
- Samples are grouped into batches of 125 samples, totaling **1,024,000 tokens per batch**.
- Each file (called a "block") contains 12,500 samples, or approximately 102.4 million tokens.
- All samples are tokenized using `GPT2TokenizerFast` from Hugging Face Transformers.
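The batch and block sizes above follow directly from the 8192-token context length; a quick sanity check of the arithmetic:

```python
SEQ_LEN = 8192           # tokens per sample
SAMPLES_PER_BATCH = 125
SAMPLES_PER_BLOCK = 12_500

tokens_per_batch = SEQ_LEN * SAMPLES_PER_BATCH       # 1,024,000
tokens_per_block = SEQ_LEN * SAMPLES_PER_BLOCK       # 102,400,000
batches_per_block = SAMPLES_PER_BLOCK // SAMPLES_PER_BATCH  # 100
```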

## Structure

- Format: binary files (`int32`) containing token IDs.
- File naming: `stage1_block_0000.bin`, `stage1_block_0001.bin`, etc.
- Tokenizer: `GPT2TokenizerFast`, with the `eos_token` used as both separator and padding token.
- Context length: 8192 tokens per sample.
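Because each token is a 4-byte `int32` and every sample is exactly 8192 tokens, a block can be memory-mapped and viewed as a 2-D array without loading it fully into RAM. A minimal sketch (the helper name `load_block` is ours, not part of the dataset):

```python
import os

import numpy as np

SEQ_LEN = 8192
DTYPE = np.int32  # 4 bytes per token ID


def load_block(path, seq_len=SEQ_LEN):
    """Memory-map a packed .bin block and view it as (num_samples, seq_len)."""
    n_tokens = os.path.getsize(path) // np.dtype(DTYPE).itemsize
    assert n_tokens % seq_len == 0, "file is not a whole number of samples"
    return np.memmap(path, dtype=DTYPE, mode="r").reshape(-1, seq_len)
```

For example, `load_block("stage1_block_0000.bin")` should return an array of shape `(12500, 8192)`.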

## Source Datasets

Tokens were drawn from a diverse mix of high-quality open datasets:

- `C4 (en)`
- `Wikipedia (2023/11 dump)`
- `OpenWebText`
- `CCNews`
- `Gutenberg`
- `arXiv`
- `BookCorpus Open`
- `S2ORC`
- `TriviaQA`
- `PAQ`
- `Natural Questions`

Each dataset was assigned a token quota (e.g., 10B tokens) to ensure a balanced mix.
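One way to enforce such quotas during streaming is to stop drawing from a source once its token budget is spent. A minimal sketch; the quota values and helper name here are hypothetical, not the actual configuration used:

```python
def take_until_quota(token_stream, quota):
    """Yield tokenized chunks from a stream until the token quota is reached."""
    seen = 0
    for chunk in token_stream:
        if seen >= quota:
            break
        seen += len(chunk)
        yield chunk


# Hypothetical per-source budgets; the real quotas are not published here.
quotas = {"c4": 10_000_000_000, "wikipedia": 3_000_000_000}
```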

## Preprocessing & Packing Strategy

- Samples were **streamed** using Hugging Face Datasets, with shuffling enabled.
- Texts were **tokenized**, filtered with a garbage filter, and concatenated with separator tokens.
- Token sequences were packed into fixed-length chunks of 8192 tokens.
- Leftover tokens from one batch are carried forward to the next, so no tokens are duplicated or lost.
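The packing loop with carry-over can be sketched as follows; this is our illustration of the strategy described above, not the exact pipeline code:

```python
SEQ_LEN = 8192
SEP_ID = 50256  # GPT-2's eos_token id, used as the separator


def pack_samples(token_lists, seq_len=SEQ_LEN, sep_id=SEP_ID):
    """Concatenate tokenized texts with separators and cut fixed-length samples.

    Leftover tokens remain in the buffer and are carried into the next
    sample, so nothing is duplicated or dropped at chunk boundaries.
    """
    buffer = []
    for tokens in token_lists:
        buffer.extend(tokens)
        buffer.append(sep_id)
        while len(buffer) >= seq_len:
            yield buffer[:seq_len]
            buffer = buffer[seq_len:]
```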

### Garbage Filtering Heuristics

Texts were removed if they had:

- Too few words or characters.
- A high symbol-to-alphanumeric ratio.
- Excessive character repetition.
- Very low word diversity.
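A filter implementing these heuristics might look like the sketch below. The thresholds are hypothetical; the README does not publish the exact values used:

```python
import re


def is_garbage(text, min_words=5, max_symbol_ratio=0.3, min_diversity=0.3):
    """Return True if a text trips any of the heuristic quality checks."""
    words = text.split()
    # Too few words.
    if len(words) < min_words:
        return True
    # Excessive character repetition (10+ repeats of one character).
    if re.search(r"(.)\1{9,}", text):
        return True
    # High symbol-to-alphanumeric ratio.
    alnum = sum(c.isalnum() for c in text)
    symbols = sum(not c.isalnum() and not c.isspace() for c in text)
    if alnum == 0 or symbols / alnum > max_symbol_ratio:
        return True
    # Very low word diversity (few unique words relative to total).
    if len(set(words)) / len(words) < min_diversity:
        return True
    return False
```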

## Usage Example

You can load and decode tokens using PyTorch:

```python
import torch
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

with open("stage1_block_0000.bin", "rb") as f:
    # bytearray makes the buffer writable, as torch.frombuffer expects
    tokens = torch.frombuffer(bytearray(f.read()), dtype=torch.int32)

sample = tokens[:8192].tolist()  # first packed sample in the block
text = tokenizer.decode(sample)
print(text)
```