---
configs:
  - config_name: minhash_deduped
    data_files:
      - split: train
        path: data/minhash_deduped/*
    default: true
  - config_name: consensus
    data_files:
      - split: train
        path: data/consensus/*
  - config_name: sentence_deduped
    data_files:
      - split: train
        path: data/sentence_deduped/*
language:
  - ar
---

# AraMix

[AraMix](https://arxiv.org/abs/2512.18834) is a deduplicated Arabic pretraining corpus containing 178 billion tokens across 179 million documents (in the `minhash_deduped` subset). Rather than scraping the web anew, AraMix combines seven publicly available Arabic datasets, applies Arabic-specific quality filtering, and performs cross-dataset deduplication.

## Subsets

| Subset | Documents | Tokens | Description |
|---|---|---|---|
| `sentence_deduped` | 167.6M | 158.8B | MinHash + sentence-level deduplication |
| `minhash_deduped` (default) | 178.9M | 177.8B | Document-level MinHash deduplication only |
| `consensus` | 47.9M | 54.1B | Documents appearing in 2+ source datasets |

## Usage

```python
from datasets import load_dataset

# Pass the config name to pick a subset; minhash_deduped is the default.
ds = load_dataset("AdaMLLab/AraMix", "sentence_deduped")
ds = load_dataset("AdaMLLab/AraMix", "minhash_deduped")
ds = load_dataset("AdaMLLab/AraMix", "consensus")
```

## Sources

| Source | Tokens | Documents |
|---|---|---|
| CulturaX | 38.4B | 40.8M |
| ArabicWeb24 | 31.6B | 33.6M |
| HPLT 2.0 | 30.4B | 33.1M |
| FineWeb-2 | 24.2B | 30.6M |
| C4 | 20.4B | 23.0M |
| ClusterLab 101B | 7.7B | 5.9M |
| FinePDFs | 6.1B | 648K |

## Pipeline

1. Quality filtering with Arabic-specific thresholds (terminal punctuation, repetition patterns, script ratio)
2. Document-level MinHash deduplication (5-gram shingles, 14 bands, 8 hashes per bucket)
3. Sentence-level deduplication (3-sentence spans, minimum 3 occurrences)
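To make the filtering and MinHash steps concrete, here is a minimal illustrative sketch, not the authors' actual pipeline: the hash functions, the script-ratio filter, and the banding arithmetic (14 bands × 8 hashes = 112 MinHash values per document) are assumptions built only from the parameters listed above.

```python
import hashlib
import random

def word_shingles(text, n=5):
    """Word-level 5-gram shingles (matching the 5-gram setting above)."""
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)} or {text}

def minhash_signature(text, num_hashes=14 * 8, seed=0):
    """One minimum per hash function; 14 bands x 8 hashes = 112 values.
    Each 'hash function' is a fixed 64-bit hash XORed with a random mask."""
    rng = random.Random(seed)
    masks = [rng.getrandbits(64) for _ in range(num_hashes)]
    hashed = [
        int.from_bytes(hashlib.blake2b(s.encode(), digest_size=8).digest(), "big")
        for s in word_shingles(text)
    ]
    return [min(h ^ m for h in hashed) for m in masks]

def lsh_band_keys(sig, bands=14, rows=8):
    """Split the signature into 14 bands of 8 values; documents sharing
    any band key fall in the same bucket and become duplicate candidates."""
    return [hash(tuple(sig[b * rows:(b + 1) * rows])) for b in range(bands)]

def arabic_script_ratio(text):
    """Share of alphabetic characters in the Arabic block (U+0600-U+06FF),
    one plausible ingredient of an Arabic-specific quality filter."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return 0.0
    return sum("\u0600" <= c <= "\u06ff" for c in letters) / len(letters)
```

With `b` bands of `r` rows, two documents with Jaccard similarity `s` collide in at least one band with probability `1 - (1 - s^r)^b`; for `b = 14`, `r = 8` that curve crosses 50% near `s ≈ 0.72`, which is the effective duplicate threshold this banding choice implies.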

## Citation

```bibtex
@misc{alrashed2025aramixrecyclingrefilteringdeduplicating,
      title={AraMix: Recycling, Refiltering, and Deduplicating to Deliver the Largest Arabic Pretraining Corpus},
      author={Sultan Alrashed and Francesco Orabona},
      year={2025},
      eprint={2512.18834},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2512.18834},
}
```

## License

See individual source dataset licenses.