Dataset: nikolina-p/gutenberg_clean_tokenized_en_splits

Tasks: Text Generation, Summarization
Modalities: Text
Formats: Parquet
Languages: English
Size: 10K - 100K rows
Libraries: Datasets, Dask, Croissant
main / gutenberg_clean_tokenized_en_splits / data / validation (1.52 GB)
1 contributor
History: 1 commit
nikolina-p: Initial upload of deduplicated, cleaned and tokenized (tiktoken gpt2) streaming dataset - train/valid/test splits (d29979c, 7 months ago)
shard-033.parquet  374 MB
shard-034.parquet  384 MB
shard-035.parquet  388 MB
shard-036.parquet  372 MB