---
language:
- hi
license: other
task_categories:
- text-generation
arxiv: 2512.18834
configs:
- config_name: minhash_deduped
  data_files:
  - split: train
    path: minhash_deduped/**/*.parquet
  default: true
- config_name: quality_filtered
  data_files:
  - split: train
    path: quality_filtered/**/*.parquet
- config_name: matched
  data_files:
  - split: train
    path: consensus/*.parquet
---
<img src="https://huggingface.co/datasets/AdaMLLab/HinMix/resolve/main/finetasks_hindi_main_results.png" width="900" alt="Finetasks benchmark scores, showing HinMix-MinHash as SOTA.">
<p align="center">
<a href="https://huggingface.co/collections/AdaMLLab/mixminmatch">
<img src="https://img.shields.io/badge/🤗_Collection-MixMinMatch-blue" alt="MixMinMatch Collection">
</a>
</p>
HinMix ([arXiv:2512.18834](https://arxiv.org/abs/2512.18834)) is a Hindi pretraining corpus of 76 billion tokens across 60 million documents (in the `minhash_deduped` subset). Rather than scraping the web from scratch, HinMix combines six publicly available Hindi datasets, applies Hindi-specific quality filtering, and deduplicates across all of them.
We train a 1.4B-parameter language model with [nanotron](https://github.com/huggingface/nanotron) on 30 billion tokens to show that HinMix outperforms the previous state of the art, [CulturaX Hindi](https://huggingface.co/datasets/uonlp/CulturaX) (see [Appendix A9 of the FineWeb-2 paper](https://arxiv.org/pdf/2506.20920)). The `minhash_deduped` subset achieves an 11.6% relative improvement, and the `matched` subset an 8.1% relative improvement.
## Subsets
| Subset | Documents | Tokens | Description |
|--------|-----------|--------|-------------|
| `quality_filtered` | 99.6M | 130.3B | Quality-filtered data before deduplication |
| `minhash_deduped` | 59.6M | 76.2B | Document-level MinHash deduplication |
| `matched` | 19.8M | 27.1B | Documents appearing in 2+ source datasets |
The `matched` subset uses cross-dataset agreement as a signal for quality.
## Usage
```python
from datasets import load_dataset

# Deduplicated subset (default): 59.6M documents, 76.2B tokens
ds = load_dataset("AdaMLLab/HinMix", "minhash_deduped", split="train")

# Quality-filtered data before deduplication: 99.6M documents, 130.3B tokens
ds = load_dataset("AdaMLLab/HinMix", "quality_filtered", split="train")

# Cross-source matched documents: 19.8M documents, 27.1B tokens
ds = load_dataset("AdaMLLab/HinMix", "matched", split="train")
```
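With subsets this large, it is often easier to stream than to download everything up front; a minimal sketch using `datasets`' standard streaming mode:
```python
from datasets import load_dataset

# Iterate over the corpus without materializing ~76B tokens on disk.
ds = load_dataset("AdaMLLab/HinMix", "minhash_deduped", split="train", streaming=True)
for example in ds:
    print(example)  # inspect the fields of the first record
    break
```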
## Sources
Token counts below use the `meta-llama/Llama-3.2-3B` tokenizer.
| Source | Tokens (MinHash) | Documents (MinHash) |
|--------|------------------|---------------------|
| FineWeb-2 | 20.0B | 17.1M |
| CulturaX | 16.6B | 11.5M |
| Sangraha (unverified) | 11.5B | 8.9M |
| HPLT 2.0 | 10.2B | 6.7M |
| Sangraha (verified) | 10.1B | 9.1M |
| C4 | 7.7B | 6.3M |
| **Total** | **76.2B** | **59.6M** |
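To reproduce or extend these counts, a minimal sketch with the `transformers` tokenizer follows; the checkpoint is gated, so this assumes you have access, and excluding special tokens is an assumption about how the counts were taken:
```python
from transformers import AutoTokenizer

# The same tokenizer the counts above are based on (gated checkpoint).
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")

def count_tokens(text: str) -> int:
    # Count only the document's own tokens, no BOS/EOS (an assumption).
    return len(tok.encode(text, add_special_tokens=False))

print(count_tokens("नमस्ते दुनिया"))  # token count for a short Hindi string
```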
## Pipeline
1. Quality filtering with Hindi-specific thresholds (Devanagari script ratio, repetition patterns, language identification); see the filter sketch below
2. Document-level MinHash deduplication (5-gram shingles, 14 bands, 8 hashes per band, similarity threshold 0.8); see the LSH sketch below
3. Cross-source matching to identify documents appearing in 2+ independent sources; see the matching sketch below
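A minimal sketch of the Devanagari-script check in step 1; the character-level definition and the 0.5 cutoff are illustrative assumptions, not the pipeline's exact values:
```python
import re

# The Devanagari Unicode block.
DEVANAGARI = re.compile(r"[\u0900-\u097F]")

def devanagari_ratio(text: str) -> float:
    """Fraction of non-whitespace characters in the Devanagari block."""
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return 0.0
    return sum(1 for c in chars if DEVANAGARI.match(c)) / len(chars)

def passes_script_filter(text: str, threshold: float = 0.5) -> bool:
    # Hypothetical cutoff; the pipeline's exact threshold is not given here.
    return devanagari_ratio(text) >= threshold
```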
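Step 2's parameters (14 bands × 8 hashes per band = 112 permutations, similarity threshold 0.8) map directly onto MinHash LSH; a sketch using the `datasketch` library, assuming word-level 5-gram shingles:
```python
from datasketch import MinHash, MinHashLSH

NUM_PERM = 14 * 8  # 14 bands x 8 hashes per band = 112 permutations

def shingles(text: str, n: int = 5) -> set[str]:
    """Word-level 5-gram shingles (the shingling unit is an assumption)."""
    words = text.split()
    return {" ".join(words[i : i + n]) for i in range(max(len(words) - n + 1, 1))}

def signature(text: str) -> MinHash:
    m = MinHash(num_perm=NUM_PERM)
    for s in shingles(text):
        m.update(s.encode("utf-8"))
    return m

# params=(bands, rows) pins the banding scheme described above.
lsh = MinHashLSH(threshold=0.8, num_perm=NUM_PERM, params=(14, 8))

docs = {"doc-1": "...", "doc-2": "..."}  # placeholder id -> text mapping
kept = []
for doc_id, text in docs.items():
    m = signature(text)
    if lsh.query(m):       # a near-duplicate is already indexed: drop this doc
        continue
    lsh.insert(doc_id, m)  # first occurrence: keep it
    kept.append(doc_id)
```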
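Step 3 can be read as keeping the near-duplicate clusters from step 2 that span at least two source datasets; a hypothetical sketch (whether one representative or all cluster members are kept is an assumption):
```python
def cross_source_matched(clusters: dict[int, list[tuple[str, str]]]) -> set[str]:
    """clusters maps a cluster id to (doc_id, source_name) pairs,
    e.g. the near-duplicate groups produced by the MinHash step."""
    keep: set[str] = set()
    for members in clusters.values():
        if len({source for _, source in members}) >= 2:  # 2+ independent sources
            keep.update(doc_id for doc_id, _ in members)
    return keep
```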
## Citation
```bibtex
@misc{alrashed2025mixminmatch,
  title={Mix, MinHash, and Match: Cross-Source Agreement for Multilingual Pretraining Datasets},
  author={Sultan Alrashed and Francesco Orabona},
  year={2025},
  eprint={2512.18834v2},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2512.18834v2},
}
```
## License
See individual source dataset licenses. |