---
configs:
- config_name: minhash_deduped
  data_files:
  - split: train
    path: data/minhash_deduped/*
  default: true
- config_name: consensus
  data_files:
  - split: train
    path: data/consensus/*
- config_name: sentence_deduped
  data_files:
  - split: train
    path: data/sentence_deduped/*
language:
- ar
---

# AraMix

AraMix ([paper](https://arxiv.org/abs/2512.18834v1)) is a deduplicated Arabic pretraining corpus containing 178 billion tokens across 179 million documents (in the `minhash_deduped` subset). Rather than re-scraping the web, AraMix combines seven publicly available Arabic datasets, applies Arabic-specific quality filtering, and performs cross-dataset deduplication.

## Subsets

| Subset | Documents | Tokens | Description |
|--------|-----------|--------|-------------|
| `sentence_deduped` | 167.6M | 158.8B | MinHash + sentence-level deduplication |
| `minhash_deduped` | 178.9M | 177.8B | Document-level MinHash deduplication only |
| `consensus` | 47.9M | 54.1B | Documents appearing in 2+ source datasets |

## Usage

```python
from datasets import load_dataset

# Pick one of the three subsets (minhash_deduped is the default config):
ds = load_dataset("AdaMLLab/AraMix", "sentence_deduped")  # MinHash + sentence-level dedup
ds = load_dataset("AdaMLLab/AraMix", "minhash_deduped")   # document-level MinHash dedup only
ds = load_dataset("AdaMLLab/AraMix", "consensus")         # documents in 2+ source datasets
```
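
The full corpus is large, so streaming avoids downloading an entire subset before iterating. A minimal sketch, assuming only the standard `datasets` streaming API; the fields of each record are whatever the parquet files define:

```python
from datasets import load_dataset

# Stream the default subset instead of materializing it on disk.
ds = load_dataset("AdaMLLab/AraMix", "minhash_deduped", split="train", streaming=True)

# Peek at a handful of records.
for example in ds.take(5):
    print(example)
```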

## Sources

| Source | Tokens | Documents |
|--------|--------|-----------|
| CulturaX | 38.4B | 40.8M |
| ArabicWeb24 | 31.6B | 33.6M |
| HPLT 2.0 | 30.4B | 33.1M |
| FineWeb-2 | 24.2B | 30.6M |
| C4 | 20.4B | 23.0M |
| ClusterLab 101B | 7.7B | 5.9M |
| FinePDFs | 6.1B | 648K |

## Pipeline

1. Quality filtering with Arabic-specific thresholds (terminal punctuation, repetition patterns, script ratio)
2. Document-level MinHash deduplication (5-gram shingles, 14 bands, 8 hashes per bucket); see the sketch after this list
3. Sentence-level deduplication (3-sentence spans, minimum 3 occurrences)
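
Below is a minimal sketch of the document-level MinHash step using the `datasketch` library with the parameters listed above (5-gram shingles, 14 bands × 8 hashes per bucket). The helper names and the toy corpus are illustrative only, not the pipeline's actual implementation.

```python
from datasketch import MinHash, MinHashLSH

NUM_PERM = 14 * 8  # 14 bands x 8 hashes per bucket = 112 permutations


def shingles(text, n=5):
    """Word-level 5-gram shingles of a document."""
    words = text.split()
    if len(words) <= n:
        return {" ".join(words)}
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def minhash_of(text):
    m = MinHash(num_perm=NUM_PERM)
    for s in shingles(text):
        m.update(s.encode("utf-8"))
    return m


# Toy stand-in for the real document stream.
corpus = [
    "هذه وثيقة عربية قصيرة تتكرر في أكثر من مصدر من مصادر البيانات",
    "هذه وثيقة عربية قصيرة تتكرر في أكثر من مصدر من مصادر البيانات",
    "أما هذه الوثيقة فمحتواها مختلف تماما عن الوثيقتين السابقتين",
]

lsh = MinHashLSH(num_perm=NUM_PERM, params=(14, 8))  # (bands, rows per band)
kept = []
for i, doc in enumerate(corpus):
    m = minhash_of(doc)
    if not lsh.query(m):            # no near-duplicate indexed yet, so keep it
        lsh.insert(f"doc-{i}", m)
        kept.append(doc)

print(len(kept))  # 2: the exact repeat is dropped, the distinct document survives
```

The sentence-level pass then removes 3-sentence spans that occur at least 3 times across the kept documents; it is omitted here for brevity.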

## Citation

```bibtex
@misc{alrashed2025aramixrecyclingrefilteringdeduplicating,
      title={AraMix: Recycling, Refiltering, and Deduplicating to Deliver the Largest Arabic Pretraining Corpus}, 
      author={Sultan Alrashed and Francesco Orabona},
      year={2025},
      eprint={2512.18834},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2512.18834}, 
}
```

## License

See individual source dataset licenses.