  <img src="https://huggingface.co/datasets/AdaMLLab/TurMix/resolve/main/finetasks_turkish_main_results.png" width="900" alt="Finetasks benchmark scores, showing TurMix-Matched as SOTA.">
<p align="center">
<a href="https://huggingface.co/collections/AdaMLLab/mixminmatch">
<img src="https://img.shields.io/badge/🤗_Collection-MixMinMatch-blue" alt="MixMinMatch Collection">
</a>
</p>

  TurMix ([https://arxiv.org/abs/2512.18834](https://arxiv.org/abs/2512.18834)) is a Turkish pretraining corpus containing 168 billion tokens across 219 million documents (in the minhash subset). Rather than scraping the web again, TurMix combines five publicly available Turkish datasets, applies Turkish-specific quality filtering, and performs cross-dataset deduplication.
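The cross-dataset deduplication step is based on MinHash. As a rough, self-contained sketch of the underlying idea — the shingle size, hash function, and signature length below are illustrative assumptions, not the settings used to build TurMix:

```python
import hashlib

def shingles(text, n=5):
    """Character n-gram shingles of a whitespace-normalized document.
    (n=5 is an illustrative choice.)"""
    text = " ".join(text.split())
    return {text[i:i + n] for i in range(max(1, len(text) - n + 1))}

def minhash_signature(text, num_perm=64):
    """One signature slot per seeded hash function: the minimum hash
    value over all shingles of the document."""
    doc = shingles(text)
    sig = []
    for seed in range(num_perm):
        # blake2b accepts a salt of up to 16 bytes; use it to derive
        # num_perm independent hash functions from one primitive.
        salt = seed.to_bytes(4, "little") + b"\x00" * 12
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(s.encode(), digest_size=8, salt=salt).digest(),
                "big",
            )
            for s in doc
        ))
    return sig

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching slots approximates the Jaccard similarity
    of the two documents' shingle sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Documents whose estimated similarity exceeds a threshold are treated as near-duplicates; production pipelines additionally bucket signatures with locality-sensitive hashing so candidate pairs are found without comparing every pair of documents.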
We train a 1.4B-parameter language model with nanotron on 30 billion tokens to show that the `matched` subset of TurMix outperforms the previous state of the art, [FineWeb-2 Turkish](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) (see [Appendix A9 in the FineWeb-2 paper](https://arxiv.org/pdf/2506.20920)), achieving a 5.5% relative improvement. Furthermore, the `minhash_deduped` subset remains competitive while providing over 2× the total number of tokens.