Added Dataset card for Pile Deduplicated training data
#4 by Ohamine - opened

README.md ADDED
---
pretty_name: The Pile (Deduplicated)
tags:
- text
- language-modeling
- text-generation
- large-scale
- deduplicated
- eleutherai
- huggingscience
- science
license: other
task_categories:
- text-generation
task_ids:
- language-modeling
language:
- en
size_categories:
- 100M<n<1B
configs:
- config_name: all
  split: train
---

# The Pile (Deduplicated)

**The Pile** is an ~825 GiB diverse, open-source text corpus for training large language models, originally introduced by EleutherAI. It is a mixture of **22** high-quality component datasets spanning academic writing (e.g., arXiv), code, web content, books, QA, dialogue, and more.

**This repository hosts the _deduplicated_ variant**: a copy of The Pile with **exact** and **near-duplicate** removal applied to reduce repeated content and limit memorization of duplicated passages.

> **Note on deduplication details:** If you need precise parameters (e.g., hashing method, thresholds), refer to the EleutherAI paper and associated documentation. This card focuses on practical usage and the metadata available in this repo.

## Dataset Summary

- **Builder name:** `the_pile_deduped`
- **Configuration:** `all`
- **Split:** `train` only
- **Examples:** 134,318,121
- **Text field:** `text` (string)
- **Uncompressed size:** 824,546,807,506 bytes (~824.5 GB)
- **Estimated download size:** 451,079,111,579 bytes (~451.1 GB)

> The figures above are taken from this repository’s `dataset_infos.json`.

## What’s inside?

The original Pile aggregates 22 public sources (e.g., arXiv, PubMed Central, Books3, OpenWebText2, StackExchange, Wikipedia, Project Gutenberg, USPTO, etc.). This deduplicated release preserves the same composition while removing exact and near-duplicate documents within and across sources to reduce redundancy.

- **Primary language:** English (with some multilingual spillover, depending on the components).
- **Use cases:** pretraining / continued pretraining of LLMs, experiments on how deduplication affects generalization and memorization, and large-scale language-modeling research.

## Supported Tasks and Benchmarks

- **Task category:** `text-generation`
- **Task ID:** `language-modeling`

Common uses include next-token prediction and self-supervised pretraining for decoder-only and encoder-decoder architectures. Downstream evaluation typically relies on standard LM benchmarks (e.g., perplexity on held-out corpora, and zero-/few-shot tasks via prompting).

## How to Use

### Load with 🤗 Datasets (streaming recommended)

```python
from datasets import load_dataset

# Load in streaming mode to avoid storing ~825 GB locally
ds = load_dataset("EleutherAI/the_pile_deduplicated", "all", split="train", streaming=True)

# Print the first three records
for i, row in enumerate(ds):
    if i < 3:
        print(row["text"][:200], "...\n")
    else:
        break
```

### Local (non-streaming) load

> ⚠️ **Warning:** this requires substantial disk space (≈825 GB uncompressed), plus additional room for shuffling and caching.

```python
from datasets import load_dataset

ds = load_dataset("EleutherAI/the_pile_deduplicated", "all", split="train")
print(len(ds))       # number of examples
print(ds.features)   # schema: a single string field named "text"
```

## Data Format

Each example has a **single field**:

- `text` (`string`): a raw text document from one of the mixture components.

## Deduplication

Relative to the original Pile, this dataset removes **exact** and **near duplicates** to curb repeated passages and large blocks of identical content. In practice, you should expect:

- Fewer exact duplicates and less mirrored content.
- Token distributions that differ somewhat from the non-deduplicated release.
- Reduced risk of memorization caused by duplicated sources.

> The precise deduplication method (e.g., hashing family, thresholds, pass order) isn’t documented in this repo’s metadata. Consult the EleutherAI paper and release notes for authoritative parameters if you need exact reproducibility.

## Splits

- **train:** 134,318,121 examples (~824.5 GB)

No validation or test splits are provided. Users typically:

- Create a **custom validation set** via random sampling, or evaluate on separate public corpora.
- Track validation loss on **held-out shards** set aside before training.

## Licensing

Multiple licenses apply across the 22 component datasets. Before redistribution or commercial use, **check the license of each component** relevant to your use case. If in doubt, consult the original sources and the EleutherAI documentation.

## Ethical Considerations & Limitations

- **Content variety:** The Pile includes diverse web and document sources. Expect varying quality, styles, and potential biases.
- **Attribution & licensing:** Ensure compliance with the licenses of component datasets when redistributing outputs or trained models.

## Citation

If you use this dataset, please cite:

```bibtex
@misc{gao2020pile,
  title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
  author={Leo Gao and Stella Biderman and Sid Black and Laurence Golding and Travis Hoppe and Charles Foster and Jason Phang and Horace He and Anish Thite and Noa Nabeshima and Shawn Presser and Connor Leahy},
  year={2020},
  eprint={2101.00027},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

## Homepage

[https://pile.eleuther.ai/](https://pile.eleuther.ai/)