abhisheksrvt committed · verified
Commit 2a6f88e · 1 Parent(s): f31746a

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -9,7 +9,7 @@ This dataset contains preprocessed and token-packed `.bin` files intended for us
 
 - Each `.bin` file contains a fixed number of samples, where each sample is exactly 8192 tokens long.
 - Samples are grouped into batches of 125 samples, totaling **1.024 million tokens per batch**.
-- Each file (called a "block") contains 12500 samples (approximately 102.4 million tokens).
+- Each file (called a "block") contains 62500 samples (approximately 512 million tokens).
 - All samples are tokenized using the `GPT2TokenizerFast` from Hugging Face Transformers.
 
 ## Structure
@@ -35,7 +35,7 @@ Tokens were drawn from a diverse mix of high-quality open datasets:
 - `PAQ`
 - `Natural Questions`
 
-Each dataset was assigned a token quota (e.g., 10B tokens) to ensure a balanced mix.
+Each dataset was assigned a token quota to ensure a balanced mix.
 
 ## Preprocessing & Packing Strategy
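The updated block size is arithmetically consistent with the stated sample and batch sizes (62500 samples × 8192 tokens = 512M tokens). A minimal sketch checking those figures, plus a hypothetical block loader — the flat uint16 layout and the `load_block` helper are assumptions for illustration, not documented in this diff:

```python
import numpy as np

SEQ_LEN = 8192           # tokens per sample (from the README)
BATCH_SAMPLES = 125      # samples per batch
BLOCK_SAMPLES = 62_500   # samples per .bin "block" (the updated figure)

tokens_per_batch = SEQ_LEN * BATCH_SAMPLES   # 1,024,000 -> "1.024 million tokens per batch"
tokens_per_block = SEQ_LEN * BLOCK_SAMPLES   # 512,000,000 -> "approximately 512 million tokens"

def load_block(path: str) -> np.ndarray:
    """Hypothetical loader: assumes tokens are stored flat as uint16
    (GPT-2's 50,257-id vocab fits in uint16); the real dtype and file
    layout are not specified in the diff."""
    data = np.memmap(path, dtype=np.uint16, mode="r")
    return data.reshape(-1, SEQ_LEN)  # one row per 8192-token sample
```

If the actual files use a different dtype or carry a header, the `memmap` call would need adjusting accordingly; only the token counts above are taken from the README itself.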