# Dataset Card for Project Gutenberg (Cleaned English Subset, Tokenized)
A cleaned and tokenized English-language subset of the Project Gutenberg dataset containing 38,026 books. Non-English texts, duplicates, and boilerplate license sections were removed for clarity and usability.
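The card does not spell out the cleaning pipeline itself; as an illustrative sketch only (the regexes and the helper name below are assumptions, not the dataset's actual preprocessing code), removing Project Gutenberg's boilerplate license sections typically means keeping only the text between the well-known `*** START ...` and `*** END ...` markers:

```python
import re

# Illustrative only: Project Gutenberg plain-text files wrap the book body
# in START/END markers, with license boilerplate outside them.
START_RE = re.compile(r"\*\*\* ?START OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*", re.IGNORECASE)
END_RE = re.compile(r"\*\*\* ?END OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*", re.IGNORECASE)

def strip_gutenberg_boilerplate(raw: str) -> str:
    """Return the text between the START/END markers, or the input unchanged."""
    start = START_RE.search(raw)
    end = END_RE.search(raw)
    if start and end and start.end() < end.start():
        return raw[start.end():end.start()].strip()
    return raw.strip()
```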
The dataset was tokenized with OpenAI's **tiktoken** tokenizer and structured for **efficient streaming and distributed (DDP) training**: the number of shards per split is divisible by 8, and the shards within each split are balanced, each containing an equal number of tokens. Each row holds 65,537 tokens (64×1,024 + 1), a layout optimized for autoregressive modeling and batch packing.
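The extra trailing token is what makes next-token prediction convenient: inputs and targets can be sliced from the same row, shifted by one position. A minimal sketch, assuming the Hugging Face `datasets` library, a placeholder repo id, and a `tokens` column (check the actual schema before use):

```python
import numpy as np
import tiktoken
from datasets import load_dataset

# Repo id and column name are placeholders; substitute the real ones.
ds = load_dataset("user/gutenberg-cleaned-tokenized", split="train", streaming=True)

for row in ds:
    tokens = np.asarray(row["tokens"], dtype=np.int64)
    assert tokens.shape == (64 * 1024 + 1,)  # 65,537 tokens per row

    # The +1 token lets inputs and targets come from the same row:
    # each targets[i, t] is the token that follows inputs[i, t].
    inputs = tokens[:-1].reshape(64, 1024)   # 64 sequences of 1,024 tokens
    targets = tokens[1:].reshape(64, 1024)   # same sequences shifted by one

    # Spot-check by decoding a few tokens; the card only says "tiktoken",
    # so the "gpt2" encoding here is an assumption.
    enc = tiktoken.get_encoding("gpt2")
    print(enc.decode(tokens[:50].tolist()))
    break
```

Because every shard carries the same number of tokens and the shard count per split is divisible by 8, shards can be dealt round-robin to (multiples of) 8 DDP ranks without any rank running out of data early.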