# Overview

This dataset is a tokenized version of the cleaned English-language subset of the Project Gutenberg Dataset [manu/project_gutenberg](https://huggingface.co/datasets/manu/project_gutenberg). It contains full-text books in English, free of boilerplate content and duplicates, and includes a pre-tokenized version of each book's content using the GPT-2 tokenizer (`tiktoken.get_encoding("gpt2")`).
**This dataset is identical to [nikolina-p/gutenberg_clean_tokenized_en](https://huggingface.co/datasets/nikolina-p/gutenberg_clean_tokenized_en) except for the split configuration.**

# Cleaning and Preprocessing

This dataset builds on a prior cleaned version of the Project Gutenberg English split. The following steps were applied: