---
task_categories:
  - text-generation
  - summarization
language:
  - en
pretty_name: Mini Project Gutenberg (Cleaned English Subset, Tokenized) Dataset
size_categories:
  - 10K<n<100K
---

# Dataset Card for Mini Project Gutenberg (Cleaned English Subset, Tokenized) Dataset

This dataset is a small subset of nikolina-p/gutenberg_flat, created for learning, for testing streaming datasets and DDP training, and for quick experimentation.

It is built from the first 24 books of that dataset. The text is tokenized with OpenAI's tiktoken tokenizer. The structure is adapted for training autoregressive models in a distributed environment: each split contains 8 shards, all shards within a split hold the same number of tokens, and each row consists of 16 × 1,024 + 1 tokens.
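The extra token in each row exists so that every position can have a next-token target. A minimal sketch of how one 16 × 1,024 + 1 row could be split into input/target pairs for autoregressive training (the array layout is an illustration, not part of the dataset schema):

```python
import numpy as np

SEQ_LEN = 16 * 1024 + 1          # 16,385 tokens per row

# Stand-in for one row of token IDs (the real rows hold tiktoken IDs).
row = np.arange(SEQ_LEN)

# Inputs are the first 16,384 tokens, targets the same tokens shifted by one;
# the trailing +1 token is what makes the shift possible without padding.
x = row[:-1].reshape(16, 1024)   # 16 training sequences of 1,024 tokens
y = row[1:].reshape(16, 1024)    # next-token targets, same shape
```

Each of the 16 sequences in `x` then pairs position-for-position with its targets in `y`.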

Total number of tokens: 2,359,440

- train split: 2,097,280
- validation split: 262,160
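These counts line up exactly with the row layout described above; a quick check (pure arithmetic, no dataset access needed):

```python
ROW_TOKENS = 16 * 1024 + 1            # 16,385 tokens per row
SHARDS = 8
TRAIN, VAL = 2_097_280, 262_160

assert TRAIN + VAL == 2_359_440       # splits sum to the stated total
assert TRAIN % ROW_TOKENS == 0        # train divides into whole rows (128)
assert VAL % ROW_TOKENS == 0          # validation divides into whole rows (16)
assert (TRAIN // ROW_TOKENS) % SHARDS == 0  # equal rows per train shard
assert (VAL // ROW_TOKENS) % SHARDS == 0    # equal rows per validation shard
```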

## Usage

```python
from datasets import load_dataset

# Stream the train split without downloading the whole dataset
ds = load_dataset("nikolina-p/mini_gutenberg_flat", split="train", streaming=True)
print(next(iter(ds)))  # first row
```
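Because the train split has 8 equally sized shards, it maps cleanly onto 8 DDP ranks; in practice `datasets.distributed.split_dataset_by_node` splits a streaming dataset across processes. The balance can be sketched offline with a simple round-robin assignment (illustrative only, not necessarily the library's internal scheme):

```python
# Offline sketch: round-robin assignment of the 128 train rows to 8 ranks.
# In real DDP code, rank and world_size come from torch.distributed, and
# datasets.distributed.split_dataset_by_node does the splitting.
world_size = 8
rows = list(range(128))                 # 128 rows of 16,385 tokens each
per_rank = {r: rows[r::world_size] for r in range(world_size)}

# Every rank sees the same amount of data -- no straggler processes.
assert all(len(v) == 16 for v in per_rank.values())
```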