SultanR committed · verified
Commit 5d50c18 · 1 Parent(s): 9e68e0b

Update README.md

Files changed (1): README.md (+53 −72)
README.md CHANGED
@@ -1,98 +1,79 @@
  ---
- license: cc-by-4.0
  language:
  - tr
- size_categories:
- - 10M<n<100M
  task_categories:
  - text-generation
- tags:
- - pretraining
- - turkish
- - deduplication
- - quality-filtered
  configs:
  - config_name: minhash_deduped
    data_files:
-   - split: train
-     path: "minhash_deduped/**/*.parquet"
- - config_name: quality_filtered
    data_files:
-   - split: train
-     path: "quality_filtered/**/*.parquet"
- - config_name: consensus
-   data_files:
-   - split: train
-     path: "consensus/*.parquet"
  ---
 
- # TurMix: Turkish Pretraining Data Mix
-
- A high-quality Turkish pretraining dataset created by combining, filtering, and deduplicating multiple sources.
-
- ## Dataset Description
-
- This dataset contains Turkish text from multiple web crawl sources, processed through a quality-filtering and MinHash deduplication pipeline.
-
- ### Sources
- - **C4** (mC4 Turkish subset)
- - **CulturaX** (Turkish)
- - **Fineweb-2** (tur_Latn)
- - **HPLT-2** (tur_Latn, 5 shards)
- - **VNGRS Web Corpus**
 
  ## Subsets
 
- ### 1. `minhash_deduped` (Recommended)
- MinHash-deduplicated data. Each source was deduplicated individually to remove near-duplicate documents.
-
- ```python
- from datasets import load_dataset
- ds = load_dataset("AdaMLLab/TurMix", "minhash_deduped")
- ```
-
- **Statistics:**
- - ~27M documents
- - 359GB compressed
-
- ### 2. `quality_filtered`
- Quality-filtered data before deduplication. Use this if you want to apply your own deduplication.
-
  ```python
  from datasets import load_dataset
- ds = load_dataset("AdaMLLab/TurMix", "quality_filtered")
- ```
-
- **Statistics:**
- - ~49M documents
- - 658GB compressed
-
- ### 3. `consensus`
- Documents that appear in 2+ sources (exact text match). These are high-confidence documents verified across multiple crawls.
-
- ```python
- from datasets import load_dataset
- ds = load_dataset("AdaMLLab/TurMix", "consensus")
  ```
 
- **Statistics:**
- - 7.84M documents
- - 13GB compressed
-
- **Schema:**
- - `text`: Document text
- - `id`: Primary document ID
- - `sources`: List of sources where the document appears (e.g., `["c4", "culturax"]`)
- - `all_ids`: All document IDs from all sources
- - `metadata`: Additional metadata
-
- ## Quality Filtering
-
- Documents were filtered based on:
- - Language identification (Turkish Latin-script ratio)
- - Document length constraints
- - Line quality metrics
- - Repetition detection (including Turkish-specific patterns)
- - Boilerplate/policy phrase removal
-
- Filter thresholds are based on the Fineweb-2 Turkish configuration.
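The filter criteria listed above can be sketched as a minimal document filter. This is only an illustration: the function names are hypothetical and the thresholds are placeholders (the actual thresholds follow the Fineweb-2 Turkish configuration).

```python
import re

# Letters counted as "Turkish Latin": ASCII letters plus Turkish-specific characters.
_TURKISH_LATIN = re.compile(r"[A-Za-zÇĞİÖŞÜçğıöşü]")

def latin_ratio(text: str) -> float:
    """Fraction of alphabetic characters written in the (Turkish) Latin script."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return 0.0
    return sum(1 for c in letters if _TURKISH_LATIN.match(c)) / len(letters)

def duplicate_line_fraction(text: str) -> float:
    """Fraction of non-empty lines that repeat an earlier line (crude repetition signal)."""
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    if not lines:
        return 0.0
    seen, dup = set(), 0
    for line in lines:
        if line in seen:
            dup += 1
        seen.add(line)
    return dup / len(lines)

def keep_document(text: str, min_chars: int = 200,
                  min_latin: float = 0.9, max_dup: float = 0.3) -> bool:
    """Apply length, script-ratio, and repetition checks (placeholder thresholds)."""
    return (
        len(text) >= min_chars
        and latin_ratio(text) >= min_latin
        and duplicate_line_fraction(text) <= max_dup
    )
```

A production pipeline would add proper language identification and the boilerplate/policy-phrase filters mentioned above; this sketch only shows the shape of the per-document checks.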
 
  ---
  language:
  - tr
+ license: other
  task_categories:
  - text-generation
+ arxiv: 2512.18834
  configs:
  - config_name: minhash_deduped
    data_files:
+   - split: train
+     path: minhash_deduped/**/*.parquet
+ - config_name: matched
    data_files:
+   - split: train
+     path: consensus/*.parquet
+ default: minhash_deduped
  ---
 
+ <img src="https://huggingface.co/datasets/AdaMLLab/TurMix/resolve/main/finetasks_turkish_main_results.png" width="900" alt="FineTasks benchmark scores, showing TurMix-Matched as SOTA.">
 
+ TurMix ([https://arxiv.org/abs/2512.18834](https://arxiv.org/abs/2512.18834)) is a Turkish pretraining corpus containing 168 billion tokens across 219 million documents (in the `minhash_deduped` subset). Rather than scraping the web again, TurMix combines five publicly available Turkish datasets, applies Turkish-specific quality filtering, and performs cross-dataset deduplication.
 
+ We train a 1.4B-parameter language model with nanotron on 30 billion tokens to show that the `matched` subset of TurMix outperforms the previous state of the art, [FineWeb-2 Turkish](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) (see [Appendix A9 of the FineWeb-2 paper](https://arxiv.org/pdf/2506.20920)), achieving a 5.5% relative improvement. The `minhash_deduped` subset performs competitively while providing over 2× as many tokens in total.
 
  ## Subsets
 
+ | Subset | Documents | Tokens | Description |
+ |--------|-----------|--------|-------------|
+ | `minhash_deduped` | 219.1M | 167.6B | Document-level MinHash deduplication only |
+ | `matched` | 67.6M | 56.0B | Documents appearing in 2+ source datasets |
 
+ The `matched` subset uses cross-dataset agreement as a signal for quality.
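Cross-dataset agreement amounts to grouping identical documents across sources and keeping those seen in two or more. A minimal sketch of the idea (the function and field handling here are hypothetical; the pipeline matches on exact text):

```python
import hashlib
from collections import defaultdict

def consensus_match(docs, min_sources=2):
    """Keep documents whose exact text appears in at least `min_sources` sources.

    `docs`: iterable of (source, doc_id, text) triples.
    Returns records shaped like the dataset schema: text, sources, all_ids.
    """
    groups = defaultdict(lambda: {"text": None, "sources": set(), "all_ids": []})
    for source, doc_id, text in docs:
        key = hashlib.sha256(text.encode("utf-8")).hexdigest()
        group = groups[key]
        group["text"] = text
        group["sources"].add(source)
        group["all_ids"].append(doc_id)
    return [
        {"text": g["text"], "sources": sorted(g["sources"]), "all_ids": g["all_ids"]}
        for g in groups.values()
        if len(g["sources"]) >= min_sources
    ]

matched = consensus_match([
    ("c4", "c4-1", "Merhaba dünya."),
    ("culturax", "cx-9", "Merhaba dünya."),       # same text, second source: kept
    ("c4", "c4-2", "Tek kaynakta geçen metin."),  # only one source: dropped
])
```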
 
 
 
+ ## Usage
 
  ```python
  from datasets import load_dataset
 
+ ds = load_dataset("AdaMLLab/TurMix", "minhash_deduped")
+ ds = load_dataset("AdaMLLab/TurMix", "matched")
  ```
 
+ ## Sources
+
+ Tokens were counted using the `meta-llama/Llama-3.2-3B` tokenizer.
+
+ | Source | Tokens (MinHash) | Documents (MinHash) |
+ |--------|------------------|---------------------|
+ | HPLT 2.0 | 46.0B | 53.7M |
+ | FineWeb-2 | 41.9B | 54.5M |
+ | CulturaX | 35.8B | 47.9M |
+ | C4 | 25.3B | 36.5M |
+ | VNGRS-Web | 18.7B | 26.5M |
+ | **Total** | **167.6B** | **219.1M** |
+
+ ## Pipeline
+
+ 1. Quality filtering with Turkish-specific thresholds (terminal punctuation, repetition patterns, Latin-script ratio, language identification)
+ 2. Document-level MinHash deduplication (5-gram shingles, 14 bands, 8 hashes per band, similarity threshold 0.8)
+ 3. Cross-source matching to identify documents appearing in 2+ independent sources
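Step 2 above can be sketched from scratch with the stated parameters (5-gram shingles, 14 bands × 8 hashes = 112 hash functions). This is a simplified illustration: seeded SHA-1 stands in for the real hash permutations, and any band collision is treated directly as a duplicate, whereas a real implementation would verify candidate pairs against the 0.8 similarity threshold.

```python
import hashlib
from collections import defaultdict

N_BANDS, ROWS_PER_BAND = 14, 8          # LSH banding as in the pipeline
N_HASHES = N_BANDS * ROWS_PER_BAND      # 112 hash functions total

def shingles(text, n=5):
    """Word-level 5-gram shingles of a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def minhash_signature(text):
    """One minimum per seeded hash function (seeded SHA-1 as a stand-in)."""
    doc_shingles = shingles(text)
    return [
        min(
            int.from_bytes(hashlib.sha1(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in doc_shingles
        )
        for seed in range(N_HASHES)
    ]

def dedup(texts):
    """Drop any document whose signature collides with a seen one in some band."""
    seen_bands = defaultdict(set)  # band index -> set of band-slice keys
    kept = []
    for text in texts:
        sig = minhash_signature(text)
        bands = [tuple(sig[b * ROWS_PER_BAND:(b + 1) * ROWS_PER_BAND])
                 for b in range(N_BANDS)]
        if any(bands[b] in seen_bands[b] for b in range(N_BANDS)):
            continue  # near-duplicate candidate: drop
        for b in range(N_BANDS):
            seen_bands[b].add(bands[b])
        kept.append(text)
    return kept

kept = dedup([
    "bu cümle beş kelimeden uzun bir örnek",
    "bu cümle beş kelimeden uzun bir örnek",      # exact duplicate: removed
    "tamamen farklı başka bir cümle burada var",  # distinct: kept
])
```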
+
+ ## Citation
+
+ ```bibtex
+ @misc{alrashed2025mixminmatch,
+     title={Mix, MinHash, and Match: Cross-Source Agreement for Multilingual Pretraining Datasets},
+     author={Sultan Alrashed and Francesco Orabona},
+     year={2025},
+     eprint={2512.18834},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL},
+     url={https://arxiv.org/abs/2512.18834},
+ }
+ ```
 
+ ## License
 
+ See the individual source dataset licenses.