nikolina-p committed (verified)
Commit 790cb26 · Parent(s): 1154dbc

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -12,7 +12,7 @@ size_categories:
 # Dataset Card for Project Gutenberg (Cleaned English Subset, Tokenized) Dataset
 
 A cleaned and tokenized English-language subset of the Project Gutenberg dataset containing 38,026 books. Non-English texts, duplicates, and boilerplate license sections were removed for clarity and usability.
-The dataset was tokenized using OpenAI's **tiktoken** tokenizer and structured for **efficient streaming and distributed (DDP) training**: each split's shards are balanced and contain an equal number of tokens. Each row includes 65,537 tokens (64×1,024+1), optimized for autoregressive modeling and batch packing.
+The dataset was tokenized using OpenAI's **tiktoken** tokenizer and structured for **efficient streaming and distributed (DDP) training**: the number of shards per split is divisible by 8, and each split's shards are balanced, containing an equal number of tokens. Each row includes 65,537 tokens (64×1,024+1), optimized for autoregressive modeling and batch packing.
 
 
 
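For context on the updated wording, here is a minimal sketch of what the row layout implies, assuming each row is a flat array of tiktoken IDs: the single extra token (64×1,024 + 1 = 65,537) lets targets be inputs shifted by one, so all 64 packed sequences get a full set of 1,024 next-token targets, and a shard count divisible by 8 lets an 8-process DDP job stride the shard list evenly. The field layout, helper names, and striding scheme below are illustrative assumptions, not the dataset's documented API.

```python
import numpy as np

SEQ_LEN = 1024                         # context length per training sequence
SEQS_PER_ROW = 64                      # sequences packed into one row
ROW_LEN = SEQS_PER_ROW * SEQ_LEN + 1   # 64 * 1,024 + 1 = 65,537 tokens per row

def unpack_row(row_tokens: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split one row into (inputs, targets) for next-token prediction.

    The one extra token means targets are simply inputs shifted by one,
    so every packed sequence keeps a full 1,024 targets.
    """
    assert row_tokens.shape == (ROW_LEN,)
    inputs = row_tokens[:-1].reshape(SEQS_PER_ROW, SEQ_LEN)
    targets = row_tokens[1:].reshape(SEQS_PER_ROW, SEQ_LEN)
    return inputs, targets

def shards_for_rank(shard_paths: list[str], rank: int, world_size: int = 8) -> list[str]:
    # With the shard count divisible by 8, striding the shard list gives
    # every DDP rank the same number of shards; since shards are
    # token-balanced, per-rank token counts are equal as well.
    return shard_paths[rank::world_size]

# Hypothetical usage with stand-in data (a real row holds tiktoken IDs):
row = np.zeros(ROW_LEN, dtype=np.uint32)
x, y = unpack_row(row)
print(x.shape, y.shape)  # (64, 1024) (64, 1024)
```

Striding by rank keeps per-rank workloads equal precisely because the shards themselves carry equal token counts; this is one plausible reading of why the shard count per split was made divisible by 8.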