---
language:
- hi
license: other
task_categories:
- text-generation
arxiv: 2512.18834
configs:
- config_name: minhash_deduped
  default: true
data_files:
- split: train
path: minhash_deduped/**/*.parquet
- config_name: quality_filtered
data_files:
- split: train
path: quality_filtered/**/*.parquet
- config_name: matched
data_files:
- split: train
path: consensus/*.parquet
---
HinMix ([arXiv:2512.18834](https://arxiv.org/abs/2512.18834)) is a Hindi pretraining corpus containing 76 billion tokens across 60 million documents (in the `minhash_deduped` subset). Rather than scraping the web again, HinMix combines six publicly available Hindi datasets, applies Hindi-specific quality filtering, and performs cross-dataset deduplication.

We train a 1.4B-parameter language model with [nanotron](https://github.com/huggingface/nanotron) on 30 billion tokens to show that HinMix outperforms the previous state of the art, [CulturaX Hindi](https://huggingface.co/datasets/uonlp/CulturaX) (see [Appendix A9 of the FineWeb-2 paper](https://arxiv.org/pdf/2506.20920)). The `minhash_deduped` subset achieves an 11.6% relative improvement, while the `matched` subset achieves an 8.1% relative improvement.
## Subsets
| Subset | Documents | Tokens | Description |
|--------|-----------|--------|-------------|
| `quality_filtered` | 99.6M | 130.3B | Quality-filtered data before deduplication |
| `minhash_deduped` | 59.6M | 76.2B | Document-level MinHash deduplication |
| `matched` | 19.8M | 27.1B | Documents appearing in 2+ source datasets |
The `matched` subset uses cross-dataset agreement as a quality signal: a document that independently survived the curation pipelines of two or more sources is more likely to be high quality than one that appears in only one (see the matching sketch under Pipeline below).
## Usage
```python
from datasets import load_dataset

ds = load_dataset("AdaMLLab/HinMix", "minhash_deduped")   # default: deduplicated, 76.2B tokens
ds = load_dataset("AdaMLLab/HinMix", "quality_filtered")  # pre-dedup, 130.3B tokens
ds = load_dataset("AdaMLLab/HinMix", "matched")           # cross-source consensus, 27.1B tokens
```
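At 76B tokens, the default subset is far too large to download casually; `datasets` supports streaming, so the corpus can be inspected without fetching every shard first. A small sketch, assuming the usual `text` column for a pretraining corpus:

```python
from datasets import load_dataset

# Stream the default subset instead of downloading all parquet shards
ds = load_dataset("AdaMLLab/HinMix", "minhash_deduped", split="train", streaming=True)

for doc in ds.take(3):
    print(doc["text"][:200])  # "text" column assumed, as is typical for pretraining data
```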
## Sources
Tokens were counted with the `meta-llama/Llama-3.2-3B` tokenizer (a counting sketch follows the table).
| Source | Tokens (MinHash) | Documents (MinHash) |
|--------|------------------|---------------------|
| FineWeb-2 | 20.0B | 17.1M |
| CulturaX | 16.6B | 11.5M |
| Sangraha (unverified) | 11.5B | 8.9M |
| HPLT 2.0 | 10.2B | 6.7M |
| Sangraha (verified) | 10.1B | 9.1M |
| C4 | 7.7B | 6.3M |
| **Total** | **76.2B** | **59.6M** |
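Counts like those above can be reproduced along these lines. A minimal sketch, assuming a `text` column and access to the gated `meta-llama/Llama-3.2-3B` tokenizer; whether the official counts include special tokens is not stated, so none are added here:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")
ds = load_dataset("AdaMLLab/HinMix", "minhash_deduped", split="train", streaming=True)

total = 0
for doc in ds:
    # Count tokens without BOS/EOS; an assumption about the official methodology
    total += len(tokenizer(doc["text"], add_special_tokens=False)["input_ids"])
print(f"{total:,} tokens")
```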
## Pipeline
1. Quality filtering with Hindi-specific thresholds (Devanagari script ratio, repetition patterns, language identification)
2. Document-level MinHash deduplication (5-gram shingles, 14 bands, 8 hashes per band, similarity threshold 0.8)
3. Cross-source matching to identify documents appearing in 2+ independent sources
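Illustrative sketches of the three steps follow; every threshold, helper, and field name not stated above is an assumption, not the paper's exact implementation. Step 1 can be pictured as a Devanagari script-ratio check (the 0.5 cutoff below is hypothetical):

```python
def devanagari_ratio(text: str) -> float:
    """Fraction of non-whitespace characters in the Devanagari block (U+0900-U+097F)."""
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return 0.0
    return sum("\u0900" <= c <= "\u097f" for c in chars) / len(chars)

def keep_document(text: str, min_ratio: float = 0.5) -> bool:
    # Hypothetical cutoff: keep documents that are mostly Devanagari script
    return devanagari_ratio(text) >= min_ratio

assert keep_document("यह एक हिंदी वाक्य है।")
assert not keep_document("This is an English sentence.")
```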
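Step 2's parameters (14 bands × 8 hashes per band = 112 permutations, similarity threshold 0.8) map directly onto an LSH index. A sketch using the `datasketch` library, which the paper does not necessarily use, with word-level 5-gram shingling assumed:

```python
from datasketch import MinHash, MinHashLSH

NUM_PERM = 14 * 8  # 14 bands x 8 hashes per band = 112 permutations

def shingles(text: str, n: int = 5) -> set[str]:
    # Word-level 5-gram shingles; character-level shingling would also be plausible
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def minhash(text: str) -> MinHash:
    m = MinHash(num_perm=NUM_PERM)
    for s in shingles(text):
        m.update(s.encode("utf-8"))
    return m

# Fix the banding to (bands=14, rows=8) and use the 0.8 Jaccard threshold
lsh = MinHashLSH(threshold=0.8, num_perm=NUM_PERM, params=(14, 8))

corpus = {  # toy stand-in for the real document stream
    "doc1": "पहला उदाहरण दस्तावेज़ जिसमें कुछ हिंदी पाठ है",
    "doc2": "दूसरा उदाहरण दस्तावेज़ जिसमें अलग हिंदी पाठ है",
}
kept = []
for key, text in corpus.items():
    m = minhash(text)
    if not lsh.query(m):  # no near-duplicate indexed yet: keep this document
        lsh.insert(key, m)
        kept.append(key)
```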
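Step 3 produces the `matched` subset. One simplified way to picture it: group near-duplicate documents into clusters (e.g. from the LSH step above) and keep one representative from every cluster whose members come from at least two distinct sources. The `cluster_id` field here is hypothetical:

```python
from collections import defaultdict

# Hypothetical records: each document carries its source corpus and a
# near-duplicate cluster id; the paper's actual matching may differ in detail
docs = [
    {"id": 0, "source": "FineWeb-2", "cluster_id": "c1"},
    {"id": 1, "source": "CulturaX",  "cluster_id": "c1"},
    {"id": 2, "source": "C4",        "cluster_id": "c2"},
]

clusters = defaultdict(list)
for d in docs:
    clusters[d["cluster_id"]].append(d)

# A cluster counts as "matched" when it spans 2+ independent sources
matched = [
    members[0]
    for members in clusters.values()
    if len({d["source"] for d in members}) >= 2
]
print(matched)  # only cluster c1 spans two sources
```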
## Citation
```bibtex
@misc{alrashed2025mixminmatch,
  title={Mix, MinHash, and Match: Cross-Source Agreement for Multilingual Pretraining Datasets},
  author={Sultan Alrashed and Francesco Orabona},
  year={2025},
  eprint={2512.18834},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2512.18834},
}
```
## License
See individual source dataset licenses.