size_categories:
- 10K<n<100K
---

# Dataset Card for Mini Project Gutenberg (Cleaned English Subset, Tokenized) Dataset
This dataset is a mini subset of [nikolina-p/gutenberg_flat](nikolina-p/gutenberg_flat), created for **learning, testing streaming datasets, DDP training, and quick experimentation**.
It is made from the first 24 books. The text is tokenized with OpenAI's tiktoken tokenizer. Its structure is adapted for training autoregressive models in a distributed environment: each split contains 8 shards, all shards within a split hold the same number of tokens, and each row consists of 16×1,024 + 1 tokens.
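A plausible reading of the 16×1,024 + 1 layout (an assumption, not stated above) is that each row packs sixteen 1,024-token training sequences plus one extra token, so input/target pairs can be sliced with a one-step offset. A minimal sketch of that slicing:

```python
# Assumed interpretation: one row = 16 context windows of 1,024 tokens,
# plus one trailing token so targets can be shifted by one position.
TOKENS_PER_ROW = 16 * 1024 + 1  # 16,385 token IDs per row

def split_row(row_tokens, context_len=1024):
    """Slice one flat row of token IDs into (input, target) pairs."""
    pairs = []
    n_seqs = (len(row_tokens) - 1) // context_len
    for i in range(n_seqs):
        # Take context_len + 1 tokens so input and target overlap by one step.
        chunk = row_tokens[i * context_len : (i + 1) * context_len + 1]
        pairs.append((chunk[:-1], chunk[1:]))  # targets shifted by one
    return pairs
```

Each pair is then a ready-made (inputs, next-token targets) example of length 1,024.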
# Usage
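Because every split ships as 8 shards with equal token counts, DDP ranks can be given disjoint shard subsets and still see the same amount of data. A minimal sketch of one such assignment (the round-robin scheme and the helper name are illustrative, not part of this dataset's tooling):

```python
# Hypothetical helper: round-robin assignment of shard indices to DDP ranks.
# With 8 equal-token shards and a world size that divides 8, every rank
# receives the same number of tokens.
def shards_for_rank(shard_ids, rank, world_size):
    """Return the shard ids this rank should read."""
    return [s for i, s in enumerate(shard_ids) if i % world_size == rank]
```

For example, with `world_size=4`, rank 0 reads shards 0 and 4, rank 3 reads shards 3 and 7, and the four ranks together cover all 8 shards exactly once.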