---
task_categories:
  - text-generation
  - summarization
language:
  - en
pretty_name: Mini Project Gutenberg Cleaned and in Splits (English Only)
size_categories:
  - 10K<n<100K
---

# Dataset Card for Mini Project Gutenberg Dataset

This dataset is a small subset of nikolina-p/gutenberg_clean_en, intended for learning, for testing streaming datasets, and for quick download and manipulation experiments.

It contains the first 24 books of the original dataset, randomly split into 39 shards to mirror the original's structure.

The text of the books is randomly split into small chunks, allowing users to experiment with dataset operations on a smaller scale.
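The random chunking described above can be sketched roughly as follows. The chunk-size bounds (200–500 characters), the seed, and the helper name are illustrative assumptions, not the parameters actually used to build this dataset.

```python
import random

# Hypothetical sketch of the chunking described above; the size bounds
# and seed are assumptions, not this dataset's actual parameters.
def random_chunks(text, lo=200, hi=500, seed=0):
    rng = random.Random(seed)
    chunks, i = [], 0
    while i < len(text):
        n = rng.randint(lo, hi)       # pick a random chunk length
        chunks.append(text[i:i + n])  # slice off the next chunk
        i += n
    return chunks

sample = "word " * 1000
chunks = random_chunks(sample)
# Chunks vary in length but concatenate back to the original text.
print(len(chunks), "".join(chunks) == sample)
```

Each chunk then becomes one row of the dataset, which keeps individual examples small enough for quick experiments.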

This dataset is identical to nikolina-p/mini_gutenberg except for the split configuration.

## Dataset Splits

The dataset is divided into three splits:

  • Training: 85% of the data
  • Validation: 10% of the data
  • Test: 5% of the data
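As a quick sanity check, the split ratios can be applied to a hypothetical example count; the total of 10,000 chunks below is an assumption for illustration, not the real dataset size.

```python
# Hypothetical illustration of the 85/10/5 split ratios above;
# the total of 10,000 chunks is an assumption, not the real size.
ratios = {"train": 0.85, "validation": 0.10, "test": 0.05}
total = 10_000
sizes = {name: round(total * frac) for name, frac in ratios.items()}
print(sizes)  # {'train': 8500, 'validation': 1000, 'test': 500}
```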

## Usage

```python
from datasets import load_dataset

# Stream the training split without downloading the full dataset
ds = load_dataset("nikolina-p/mini_gutenberg_splits", split="train", streaming=True)
print(next(iter(ds)))
```