---
task_categories:
- text-generation
- summarization
language:
- en
pretty_name: Mini Project Gutenberg (Cleaned English Subset, Tokenized) Dataset
size_categories:
- 10K<n<100K
---

# Dataset Card for Mini Project Gutenberg (Cleaned English Subset, Tokenized) Dataset


This dataset is a mini subset of [nikolina-p/gutenberg_flat](https://huggingface.co/datasets/nikolina-p/gutenberg_flat), created for **learning, testing streaming datasets, DDP training, and quick experimentation**.

It is made from the first 24 books. The text is tokenized with OpenAI's tiktoken tokenizer. The structure is adapted for training autoregressive models in a distributed environment: each split contains 8 shards, all shards within a split have the same number of tokens, and each row consists of 16 × 1,024 + 1 tokens.

Total number of tokens: 2,359,440
 - train split: 2,097,280
 - validation split: 262,160
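
With this layout, a single row holds 16 training blocks of length 1,024 plus one extra token for the shifted next-token targets. Below is a minimal sketch of how a row might be unpacked; the column name `tokens` and the `gpt2` tiktoken encoding are assumptions, so check the dataset's actual schema before using it.

```python
import numpy as np
import tiktoken
from datasets import load_dataset

ds = load_dataset("nikolina-p/mini_gutenberg_flat", split="train", streaming=True)
row = next(iter(ds))

# Assumption: the token ids live in a column named "tokens"; adjust to the real schema.
tokens = np.asarray(row["tokens"], dtype=np.int64)  # shape: (16 * 1024 + 1,)

# Inputs are the first 16*1024 tokens; targets are the same sequence shifted by one token,
# so y[i, j] is the token that follows x[i, j] in the original text.
x = tokens[:-1].reshape(16, 1024)
y = tokens[1:].reshape(16, 1024)

# Decode one block back to text (the gpt2 encoding is an assumption, not stated by the card).
enc = tiktoken.get_encoding("gpt2")
print(enc.decode(x[0].tolist()))
```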

# Usage

```python
from datasets import load_dataset

# Stream the train split (no full download needed) and inspect the first row
ds = load_dataset("nikolina-p/mini_gutenberg_flat", split="train", streaming=True)
print(next(iter(ds)))
```
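
Because each split is stored as 8 shards with an equal number of tokens, the streaming dataset splits cleanly across DDP processes. A sketch using `datasets.distributed.split_dataset_by_node`; the rank and world size below are placeholders and would normally come from `torch.distributed`:

```python
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

ds = load_dataset("nikolina-p/mini_gutenberg_flat", split="train", streaming=True)

# Placeholders; in a real DDP run use torch.distributed.get_rank() / get_world_size().
rank, world_size = 0, 4

# With 8 shards per split, each of 4 ranks streams 2 whole shards.
ds_rank = split_dataset_by_node(ds, rank=rank, world_size=world_size)

for example in ds_rank:
    pass  # feed the example to the model on this rank
```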