Update README.md

README.md CHANGED

@@ -23,6 +23,12 @@ default: minhash_deduped
 
 <img src="https://huggingface.co/datasets/AdaMLLab/HinMix/resolve/main/finetasks_hindi_main_results.png" width="900" alt="Finetasks benchmark scores, showing HinMix-MinHash as SOTA.">
 
+<p align="center">
+  <a href="https://huggingface.co/collections/AdaMLLab/mixminmatch">
+    <img src="https://img.shields.io/badge/🤗_Collection-MixMinMatch-blue" alt="MixMinMatch Collection">
+  </a>
+</p>
+
 HinMix ([https://arxiv.org/abs/2512.18834](https://arxiv.org/abs/2512.18834)) is a Hindi pretraining corpus containing 76 billion tokens across 60 million documents (in the `minhash_deduped` subset). Rather than scraping the web again, HinMix combines six publicly available Hindi datasets, applies Hindi-specific quality filtering, and performs cross-dataset deduplication.
 
 We train a 1.4B parameter language model with nanotron on 30 billion tokens to show that HinMix outperforms the previous state of the art, [CulturaX Hindi](https://huggingface.co/datasets/uonlp/CulturaX) (see [Appendix A9 in the Fineweb-2 paper](https://arxiv.org/pdf/2506.20920)). The `minhash_deduped` subset achieves an 11.6% relative improvement, while the `matched` subset achieves an 8.1% relative improvement.
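
For reference, a minimal sketch of loading the `minhash_deduped` subset with the 🤗 `datasets` library; the `train` split and `text` column names are assumptions here, as the diff does not show the dataset's schema.

```python
# Minimal sketch: stream HinMix's default subset instead of downloading
# all ~60M documents up front. The "train" split and "text" column are
# assumed, not confirmed by this README.
from datasets import load_dataset

ds = load_dataset("AdaMLLab/HinMix", "minhash_deduped", split="train", streaming=True)

# Peek at the first document.
first = next(iter(ds))
print(first["text"][:200])
```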
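Note that the quoted gains are relative, not absolute: improvement = (HinMix score - CulturaX score) / CulturaX score. A tiny illustration with placeholder scores (not values from the paper):

```python
# Relative improvement over a baseline score. The numbers below are
# placeholders chosen for illustration, not results from the HinMix paper.
def relative_improvement(new_score: float, baseline: float) -> float:
    return (new_score - baseline) / baseline

print(f"{relative_improvement(0.558, 0.500):.1%}")  # prints 11.6%
```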