---
pretty_name: The Pile (Deduplicated)
tags:
- text
- language-modeling
- text-generation
- large-scale
- deduplicated
- eleutherai
- huggingscience
- science
license: other
task_categories:
- text-generation
task_ids:
- language-modeling
language:
- en
size_categories:
- 100M<n<1B
configs:
- config_name: all
  split: train
---

# The Pile (Deduplicated)

**The Pile** is an ~825 GiB diverse, open-source text corpus for training large language models, originally introduced by EleutherAI. It is a mixture of **22** high-quality component datasets spanning academic writing (e.g., arXiv), code, web content, books, QA, dialogues, and more.

**This repository hosts the _deduplicated_ variant**: a copy of The Pile with **exact** and **near-duplicate** removal applied to reduce repeated content and limit memorization from duplicated passages.

> **Note on deduplication details:** If you need precise parameters (e.g., hashing method, thresholds), please refer to the EleutherAI paper and associated documentation. This card focuses on practical usage and the metadata available in this repo.


## Dataset Summary

- **Builder name:** `the_pile_deduped`  
- **Configuration:** `all`  
- **Split:** `train` only  
- **Examples:** `134,318,121`  
- **Dataset text field:** `text` (string)  
- **Uncompressed size:** `824,546,807,506` bytes (~824.5 GB)  
- **Estimated download size:** `451,079,111,579` bytes (~451.1 GB)

> Figures above are taken from this repository’s `dataset_infos.json`.
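The per-document average follows directly from those figures (a quick back-of-the-envelope check, not an official statistic):

```python
# Figures from this repo's dataset_infos.json
total_bytes = 824_546_807_506
num_examples = 134_318_121

avg_bytes = total_bytes / num_examples
print(f"{avg_bytes:,.0f} bytes of text per document on average")  # roughly 6.1 KB
```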



## What’s inside?

The original Pile aggregates 22 public sources (e.g., arXiv, PubMed Central, Books3, OpenWebText2, StackExchange, Wikipedia, Project Gutenberg, USPTO, etc.). This deduplicated release preserves the same composition while removing exact and near-duplicate documents across and within sources to reduce redundancy.

- **Primary language:** English (with some multilingual spillover depending on components).
- **Use cases:** Pretraining / continued pretraining for LLMs, experimentation with deduplication effects on generalization and memorization, large-scale language modeling research.



## Supported Tasks and Benchmarks

- **Task category:** `text-generation`
- **Task ID:** `language-modeling`

Common uses include next-token prediction and self-supervised pretraining for decoder-only and encoder-decoder architectures. Downstream evaluation typically leverages standard LM benchmarks (e.g., perplexity on held-out corpora, zero-/few-shot tasks via prompting).
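For reference, held-out perplexity is the exponential of the mean per-token negative log-likelihood. A minimal stdlib sketch (the `token_logprobs` input is a hypothetical list of per-token log-probabilities produced by your model):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# A model that assigns probability 0.25 to every token has perplexity 4.
print(perplexity([math.log(0.25)] * 8))  # ≈ 4.0
```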



## How to Use

### Load with 🤗 Datasets (iterable streaming recommended)

```python
from datasets import load_dataset

# Load dataset in streaming mode to avoid storing 825 GB locally
ds = load_dataset("EleutherAI/the_pile_deduplicated", "all", split="train", streaming=True)

# Print (a prefix of) the first three records
from itertools import islice

for row in islice(ds, 3):
    print(row["text"][:200], "...\n")
```

### Local (non-streaming) load

> ⚠️ **Warning:** requires substantial disk space (≈825 GB uncompressed) plus additional RAM and cache space for shuffling.

```python
from datasets import load_dataset

ds = load_dataset("EleutherAI/the_pile_deduplicated", "all", split="train")
print(len(ds))
print(ds.features)
```



## Data Format

**Single field:**

- `text` (`string`): raw text documents from the mixture components.



## Deduplication

This release removes **exact** and **near-duplicate** documents relative to the original Pile, curbing repeated passages and large blocks of identical content. Practically, you should expect:

- Fewer exact duplicates and mirrored content.  
- Potential differences in token distributions compared to the non-deduplicated release.  
- Reduced risk of memorization from duplicated sources.  

> Precise deduplication method (e.g., hashing family, thresholds, pass order) isn’t provided in this repo’s metadata. Please consult the EleutherAI paper and release notes for authoritative parameters if you need exact reproducibility.
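For intuition only, a common near-duplicate detection technique in this space is MinHash over character shingles: signatures agree on roughly the same fraction of slots as the Jaccard similarity of the underlying shingle sets. This is an illustrative sketch of the general idea, **not** the method used to build this release:

```python
import hashlib

def shingles(text, k=5):
    """Split text into overlapping character k-grams (shingles)."""
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

def minhash_signature(text, num_hashes=64):
    """For each seeded hash function, keep the minimum hash value
    observed over all shingles of the document."""
    return [
        min(
            int.from_bytes(hashlib.sha1(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingles(text)
        )
        for seed in range(num_hashes)
    ]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching slots approximates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = minhash_signature("The quick brown fox jumps over the lazy dog.")
b = minhash_signature("The quick brown fox jumped over the lazy dog.")
c = minhash_signature("Completely unrelated sentence about datasets.")
print(estimated_jaccard(a, b))  # high (near-duplicates)
print(estimated_jaccard(a, c))  # low (unrelated)
```

In production pipelines the signatures are bucketed with locality-sensitive hashing so that only likely duplicates are compared pairwise.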



## Splits

- **train:** 134,318,121 examples (~824.5 GB)

No validation/test splits are provided. Users typically:

- Create a **custom validation set** via random sampling or by evaluating on separate public corpora.  
- Track validation loss with **held-out shards** they set aside prior to training.  
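As one concrete pattern, a held-out subset can be assigned deterministically by hashing each document's text, so the split is reproducible across runs. This is a sketch under the assumption that a stable ~0.5% sample is wanted; the `is_validation` helper and the rate are illustrative, not part of this repo:

```python
import hashlib

def is_validation(text: str, holdout_per_mille: int = 5) -> bool:
    """Deterministically route ~0.5% of documents to a held-out set
    by hashing the document text."""
    h = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")
    return h % 1000 < holdout_per_mille

# Example over a synthetic stream of documents:
val, train = [], []
for i in range(10_000):
    doc = f"document number {i}"
    (val if is_validation(doc) else train).append(doc)
print(len(val), len(train))  # roughly 50 / 9,950
```

With streaming loads, the same predicate can be applied on the fly via `IterableDataset.filter`.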



## Licensing

Multiple licenses apply across the 22 component datasets. Before redistribution or commercial use, **check the license of each component** relevant to your use case. If in doubt, consult the original sources and the EleutherAI documentation.



## Ethical Considerations & Limitations

- **Content variety:** The Pile includes diverse web and document sources. Expect varying quality, styles, and potential biases.
- **Attribution & licensing:** Ensure compliance with the licenses of component datasets when redistributing outputs or trained models.  



## Citation

If you use this dataset, please cite:

```bibtex
@misc{gao2020pile,
  title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
  author={Leo Gao and Stella Biderman and Sid Black and Laurence Golding and Travis Hoppe and Charles Foster and Jason Phang and Horace He and Anish Thite and Noa Nabeshima and Shawn Presser and Connor Leahy},
  year={2020},
  eprint={2101.00027},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```



## Homepage

[https://pile.eleuther.ai/](https://pile.eleuther.ai/)