This dataset is a tokenized version of the cleaned English-language subset of the Project Gutenberg Dataset [manu/project_gutenberg](https://huggingface.co/datasets/manu/project_gutenberg). It contains full-text books in English, free of boilerplate content and duplicates, and includes a pre-tokenized version of each book's content using the GPT-2 tokenizer (tiktoken.get_encoding("gpt2")).

It contains 38,026 books.

**This dataset is identical to [nikolina-p/gutenberg_clean_tokenized_en_splits](https://huggingface.co/datasets/nikolina-p/gutenberg_clean_tokenized_en_splits) except for the split configuration.**

# Cleaning and Preprocessing
This dataset builds on a prior cleaned version of the Project Gutenberg English split. The following steps were applied:
✅ Filtered only the English split from the original dataset (config='en')