Update README.md

README.md CHANGED

@@ -9,7 +9,7 @@ This dataset contains preprocessed and token-packed `.bin` files intended for us
 
 - Each `.bin` file contains a fixed number of samples, where each sample is exactly 8192 tokens long.
 - Samples are grouped into batches of 125 samples, totaling **1.024 million tokens per batch**.
-- Each file (called a "block") contains
+- Each file (called a "block") contains 62500 samples (approximately 512 million tokens).
 - All samples are tokenized using the `GPT2TokenizerFast` from Hugging Face Transformers.
 
 ## Structure
@@ -35,7 +35,7 @@ Tokens were drawn from a diverse mix of high-quality open datasets:
 - `PAQ`
 - `Natural Questions`
 
-Each dataset was assigned a token quota
+Each dataset was assigned a token quota to ensure a balanced mix.
 
 ## Preprocessing & Packing Strategy