---
task_categories:
- text-generation
- summarization
language:
- en
pretty_name: Project Gutenberg Cleaned (English Only)
size_categories:
- 10K<n<100K
---

# Dataset Card for Mini Project Gutenberg Dataset

This dataset is a mini subset of the dataset [nikolina-p/gutenberg_clean_en](https://huggingface.co/datasets/nikolina-p/gutenberg_clean_en), created for **learning, testing streaming datasets, and quick downloading and manipulation**.

It is made from the first 24 books, which are randomly split into 39 shards, mirroring the structure of the original dataset.

The text of the books is randomly split into small chunks, allowing users to experiment with dataset operations on a smaller scale.

**This dataset is identical to [nikolina-p/mini_gutenberg_splits](https://huggingface.co/datasets/nikolina-p/mini_gutenberg_splits) except for the split configuration.**

# Usage

```python
from datasets import load_dataset

ds = load_dataset("nikolina-p/mini_gutenberg", split="train", streaming=True)
print(next(iter(ds)))
```