nikolina-p committed (verified)
Commit 1cbc158 · Parent(s): bded2e8

Update README.md

Files changed (1):
  1. README.md +1 -0
README.md CHANGED
@@ -13,6 +13,7 @@ size_categories:
 
 
 This dataset is a mini subset of the dataset [nikolina-p/gutenberg_clean_en](https://huggingface.co/datasets/nikolina-p/gutenberg_clean_en), with **tokenized** book texts using the tiktoken tokenizer gpt2 encoding.
+Total number of tokens is 2,110,010.
 It was created for **learning, testing streaming datasets, and quick downloading and manipulation**.
 
 It is made from the first 24 books, which are randomly split into 39 shards, mirroring the structure of the original dataset.