
Dataset: nikolina-p/gutenberg_clean_tokenized_en_splits

Tasks: Text Generation, Summarization
Modalities: Text
Formats: Parquet
Languages: English
Size: 10K - 100K rows
Libraries: Datasets, Dask
gutenberg_clean_tokenized_en_splits / data / validation
Total size: 1.52 GB
Contributors: 1
History: 1 commit
nikolina-p: "Initial upload of deduplicated, cleaned and tokenized (tiktoken gpt2) streaming dataset - train/valid/test splits" (commit d29979c, 7 months ago)
  • shard-033.parquet (374 MB)
  • shard-034.parquet (384 MB)
  • shard-035.parquet (388 MB)
  • shard-036.parquet (372 MB)
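Since the card lists the Datasets library and the commit message describes a streaming dataset, the validation shards above can be read lazily rather than downloaded whole. A minimal sketch using the standard `datasets` streaming API; the repo ID is taken from this page, but the column names inside the Parquet shards are not shown here, so `peek_validation` simply returns whatever rows come back:

```python
# Repo ID as shown on the dataset page.
REPO_ID = "nikolina-p/gutenberg_clean_tokenized_en_splits"


def peek_validation(n: int = 2):
    """Stream the first n rows of the validation split.

    Uses streaming mode so the ~1.52 GB of validation shards
    is never downloaded in full; requires `pip install datasets`.
    """
    from datasets import load_dataset

    ds = load_dataset(REPO_ID, split="validation", streaming=True)
    rows = []
    for i, row in enumerate(ds):
        if i >= n:
            break
        rows.append(row)
    return rows
```

Streaming mode iterates over the remote Parquet files shard by shard, which suits a tokenized corpus of this size: a training loop can consume rows as they arrive instead of materializing the whole split on disk.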