---
language:
- ar
license: other
task_categories:
- text-generation
arxiv: 2512.18834
configs:
- config_name: minhash_deduped
  data_files:
  - split: train
    path: data/minhash_deduped/*
  default: true
- config_name: matched
  data_files:
  - split: train
    path: data/consensus/*
- config_name: sentence_deduped
  data_files:
  - split: train
    path: data/sentence_deduped/*
---
# AraMix

[AraMix](https://arxiv.org/abs/2512.18834) is an Arabic pretraining corpus containing 178 billion tokens across 179 million documents (in the `minhash_deduped` subset). Rather than scraping the web again, AraMix combines seven publicly available Arabic datasets, applies Arabic-specific quality filtering, and performs cross-dataset deduplication.

We trained a 1.4B-parameter language model with nanotron on 30 billion tokens to show that the `consensus` subset of AraMix outperforms the previous state of the art, ArabicWeb24 (see Appendix A9 of the FineWeb-2 paper), while containing more total tokens. Furthermore, the `minhash_deduped` subset performs on par while offering nearly 5 times the total number of tokens.

In this setup, we removed all `consensus` samples with more than 5 duplicates.
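The duplicate-count cutoff above can be sketched as a simple filter. This is an illustrative, hypothetical helper assuming exact string equality; in the real pipeline the duplicate counts come from the deduplication stage, not raw comparisons:

```python
from collections import Counter

def drop_overduplicated(docs: list[str], max_duplicates: int = 5) -> list[str]:
    """Keep only documents whose text occurs at most `max_duplicates` times."""
    counts = Counter(docs)
    return [d for d in docs if counts[d] <= max_duplicates]
```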
## Subsets

| Subset | Documents | Tokens | Description |
|---|---|---|---|
| `minhash_deduped` | 178.9M | 177.8B | Document-level MinHash deduplication only |
| `sentence_deduped` | 167.6M | 158.8B | MinHash + sentence-level deduplication |
| `consensus` | 47.9M | 54.1B | Documents appearing in 2+ source datasets |
The consensus subset uses cross-dataset agreement as a signal for quality.
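The idea of keeping documents that appear in 2+ source datasets can be sketched as follows. This is a minimal illustration assuming exact matching after whitespace normalization; the pipeline's actual matching criterion (near-duplicate detection via MinHash) is more permissive, and `consensus_subset` / `content_key` are hypothetical names:

```python
import hashlib
from collections import defaultdict

def content_key(text: str) -> str:
    # Normalize whitespace before hashing so trivially different
    # copies of the same document still collide on the same key.
    return hashlib.sha256(" ".join(text.split()).encode("utf-8")).hexdigest()

def consensus_subset(sources: dict[str, list[str]]) -> list[str]:
    """Keep documents whose content appears in at least 2 source datasets."""
    seen_in = defaultdict(set)  # content key -> set of source names
    by_key = {}                 # content key -> one representative text
    for name, docs in sources.items():
        for doc in docs:
            key = content_key(doc)
            seen_in[key].add(name)
            by_key[key] = doc
    return [by_key[k] for k, names in seen_in.items() if len(names) >= 2]
```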
## Usage

```python
from datasets import load_dataset

ds = load_dataset("AdaMLLab/AraMix", "minhash_deduped")   # default config
ds = load_dataset("AdaMLLab/AraMix", "sentence_deduped")
ds = load_dataset("AdaMLLab/AraMix", "matched")           # the consensus subset
```
## Sources

Token counts use the meta-llama/Llama-3.2-3B tokenizer.
| Source | Tokens (Before) | Tokens (MinHash + Quality Filter) | Tokens (Sent-Dedup) |
|---|---|---|---|
| CulturaX | 87.4B (19.8%) | 42.1B (23.7%) | 38.4B (24.2%) |
| ArabicWeb24 | 40.7B (9.2%) | 35.4B (19.9%) | 31.6B (19.9%) |
| HPLT 2.0 | 108.4B (24.5%) | 34.7B (19.5%) | 30.4B (19.1%) |
| FineWeb-2 | 67.2B (15.2%) | 27.5B (15.5%) | 24.2B (15.2%) |
| C4 | 59.2B (13.4%) | 22.5B (12.7%) | 20.4B (12.9%) |
| 101B / ClusterLab | 49.9B (11.3%) | 9.5B (5.3%) | 7.7B (4.8%) |
| FinePDFs | 29.7B (6.7%) | 6.3B (3.5%) | 6.1B (3.8%) |
| Total | 442.5B (100%) | 177.8B (100%) | 158.8B (100%) |
## Pipeline
- Quality filtering with Arabic-specific thresholds (terminal punctuation, repetition patterns, script ratio)
- Document-level MinHash deduplication (5-gram shingles, 14 bands, 8 hashes per bucket)
- Sentence-level deduplication (3-sentence spans, minimum 3 occurrences)
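The MinHash banding step above (14 bands, 8 hashes per bucket, i.e. 112 hash functions over 5-gram shingles) can be sketched with locality-sensitive hashing: documents sharing all 8 min-hashes in any band become dedup candidates. This is an illustrative reimplementation, not the pipeline's code; word-level shingling and seeded `blake2b` as the hash family are assumptions:

```python
import hashlib
from itertools import combinations

NUM_BANDS, ROWS_PER_BAND = 14, 8           # 14 bands x 8 hashes = 112 hash functions
NUM_HASHES = NUM_BANDS * ROWS_PER_BAND

def shingles(text: str, n: int = 5) -> set[str]:
    # Word-level 5-gram shingles.
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def minhash_signature(text: str) -> list[int]:
    # One min-hash per seeded hash function.
    sig = []
    for seed in range(NUM_HASHES):
        salt = seed.to_bytes(2, "big")
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(s.encode(), digest_size=8, salt=salt).digest(), "big")
            for s in shingles(text)
        ))
    return sig

def candidate_pairs(docs: list[str]) -> set[tuple[int, int]]:
    """Documents that agree on any full band of the signature become candidates."""
    buckets: dict[tuple, list[int]] = {}
    for idx, doc in enumerate(docs):
        sig = minhash_signature(doc)
        for b in range(NUM_BANDS):
            band = (b, tuple(sig[b * ROWS_PER_BAND:(b + 1) * ROWS_PER_BAND]))
            buckets.setdefault(band, []).append(idx)
    pairs = set()
    for members in buckets.values():
        pairs.update(combinations(members, 2))
    return pairs
```

Candidate pairs would then be verified (e.g. by Jaccard similarity of their shingle sets) before one of the two documents is dropped.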
## Citation

```bibtex
@misc{alrashed2025mixminmatch,
      title={Mix, MinHash, and Match: Cross-Source Agreement for Multilingual Pretraining Datasets},
      author={Sultan Alrashed and Francesco Orabona},
      year={2025},
      eprint={2512.18834v2},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2512.18834v2},
}
```
## License
See individual source dataset licenses.