---
pretty_name: The Pile (Deduplicated)
tags:
  - text
  - language-modeling
  - text-generation
  - large-scale
  - deduplicated
  - eleutherai
  - huggingscience
  - science
license: other
task_categories:
  - text-generation
task_ids:
  - language-modeling
language:
  - en
size_categories:
  - 100M<n<1B
configs:
  - config_name: all
    split: train
---

The Pile (Deduplicated)

The Pile is an ~825 GiB diverse, open-source text corpus for training large language models, originally introduced by EleutherAI. It is a mixture of 22 high-quality component datasets spanning academic writing (e.g., arXiv), code, web content, books, QA, dialogues, and more.

This repository hosts the deduplicated variant: a copy of The Pile with exact and near-duplicate removal applied to reduce repeated content and limit memorization from duplicated passages.

Note on deduplication details: If you need precise parameters (e.g., hashing method, thresholds), please refer to the EleutherAI paper and associated documentation. This card focuses on practical usage and the metadata available in this repo.

Dataset Summary

  • Builder name: the_pile_deduped
  • Configuration: all
  • Split: train only
  • Examples: 134,318,121
  • Dataset text field: text (string)
  • Uncompressed size: 824,546,807,506 bytes (~824.5 GB)
  • Estimated download size: 451,079,111,579 bytes (~451.1 GB)

Figures above are taken from this repository’s dataset_infos.json.

What’s inside?

The original Pile aggregates 22 public sources (e.g., arXiv, PubMed Central, Books3, OpenWebText2, StackExchange, Wikipedia, Project Gutenberg, USPTO, etc.). This deduplicated release preserves the same composition while removing exact and near-duplicate documents across and within sources to reduce redundancy.

  • Primary language: English (with some multilingual spillover depending on components).
  • Use cases: Pretraining / continued pretraining for LLMs, experimentation with deduplication effects on generalization and memorization, large-scale language modeling research.

Supported Tasks and Benchmarks

  • Task category: text-generation
  • Task ID: language-modeling

Common uses include next-token prediction and self-supervised pretraining for decoder-only and encoder-decoder architectures. Downstream evaluation typically leverages standard LM benchmarks (e.g., perplexity on held-out corpora, zero-/few-shot tasks via prompting).
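As a reminder of the core objective: the language-modeling loss is the average negative log-likelihood of the observed next tokens, and perplexity is its exponential. A tiny pure-arithmetic illustration (toy probabilities, not tied to any particular model):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(mean negative log-likelihood of the observed tokens)."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# A model assigning probability 0.25 to every observed token has perplexity ~4.
print(perplexity([0.25, 0.25, 0.25, 0.25]))
```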

How to Use

Load with 🤗 Datasets (iterable streaming recommended)

from itertools import islice

from datasets import load_dataset

# Load in streaming mode to avoid storing ~825 GB locally
ds = load_dataset("EleutherAI/the_pile_deduplicated", "all", split="train", streaming=True)

# Preview the first three records
for row in islice(ds, 3):
    print(row["text"][:200], "...\n")

Local (non-streaming) load

⚠️ Attention: requires substantial disk space (≈451 GB download plus ≈825 GB uncompressed in the local cache) and ample RAM for shuffling/caching.

from datasets import load_dataset

ds = load_dataset("EleutherAI/the_pile_deduplicated", "all", split="train")
print(len(ds))
print(ds.features)

Data Format

Single field:

  • text (string): raw text documents from the mixture components.

Deduplication

This release removes exact and near-duplicate documents relative to the original Pile, curbing repeated passages and large blocks of identical content. In practice, you should expect:

  • Fewer exact duplicates and mirrored content.
  • Potential differences in token distributions compared to the non-deduplicated release.
  • Reduced risk of memorization from duplicated sources.

Precise deduplication method (e.g., hashing family, thresholds, pass order) isn’t provided in this repo’s metadata. Please consult the EleutherAI paper and release notes for authoritative parameters if you need exact reproducibility.
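While the production pipeline isn't documented here, the general idea behind exact deduplication can be sketched in a few lines. This is a toy illustration (hash-based exact matching after whitespace/case normalization), not the method used to build this release:

```python
import hashlib

def normalize(text: str) -> str:
    # Collapse whitespace and lowercase so trivially different copies collide.
    return " ".join(text.lower().split())

def dedup_exact(docs):
    """Drop documents whose normalized content hashes to an already-seen digest."""
    seen = set()
    kept = []
    for doc in docs:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept

docs = ["Hello  world", "hello world", "Something else"]
print(dedup_exact(docs))  # ['Hello  world', 'Something else']
```

Near-duplicate removal additionally requires a similarity measure (e.g., n-gram overlap or MinHash-style sketches); consult the EleutherAI documentation for the parameters actually used.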

Splits

  • train: 134,318,121 examples (~824.5 GB)

No validation/test splits are provided. Users typically:

  • Create a custom validation set via random sampling or by evaluating on separate public corpora.
  • Track validation loss with held-out shards they set aside prior to training.
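One simple, reproducible way to carve out a held-out set is to route each document to a split by a deterministic hash of its content. A sketch (the MD5 choice and holdout rate are arbitrary, not a convention of this dataset):

```python
import hashlib

def split_bucket(text: str, holdout_pct: int = 1) -> str:
    """Assign a document to 'validation' or 'train' by hashing its content."""
    digest = hashlib.md5(text.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0-99
    return "validation" if bucket < holdout_pct else "train"

docs = ["alpha", "beta", "gamma", "delta"]
print({d: split_bucket(d, holdout_pct=25) for d in docs})
```

Because the assignment depends only on the text, it is stable across runs and machines, so the same documents are always held out.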

Licensing

Multiple licenses apply across the 22 component datasets. Before redistribution or commercial use, check the license of each component relevant to your use case. If in doubt, consult the original sources and the EleutherAI documentation.

Ethical Considerations & Limitations

  • Content variety: The Pile includes diverse web and document sources. Expect varying quality, styles, and potential biases.
  • Attribution & licensing: Ensure compliance with the licenses of component datasets when redistributing outputs or trained models.

Citation

If you use this dataset, please cite:

@misc{gao2020pile,
  title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
  author={Leo Gao and Stella Biderman and Sid Black and Laurence Golding and Travis Hoppe and Charles Foster and Jason Phang and Horace He and Anish Thite and Noa Nabeshima and Shawn Presser and Connor Leahy},
  year={2020},
  eprint={2101.00027},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

Homepage

https://pile.eleuther.ai/