---
task_categories:
- text-generation
- summarization
language:
- en
pretty_name: Project Gutenberg Cleaned (English Only)
size_categories:
- 10K<n<100K
---

# Dataset Card for Mini Project Gutenberg TOKENIZED Dataset

This dataset is a mini subset of [nikolina-p/gutenberg_clean_en](https://huggingface.co/datasets/nikolina-p/gutenberg_clean_en), with the book texts **tokenized** using the tiktoken tokenizer with the `gpt2` encoding.
The total number of tokens is 2,110,010.
It was created for **learning, testing streaming datasets, and quick downloading and manipulation**.

It is made from the first 24 books, which are randomly split into 39 shards, mirroring the structure of the original dataset.

The text of the books is randomly split into small chunks, allowing users to experiment with dataset operations on a smaller scale.

# Usage

```python
from datasets import load_dataset

# Stream the dataset without downloading it in full
ds = load_dataset("nikolina-p/mini_gutenberg_tokenized", split="train", streaming=True)
print(next(iter(ds)))
```