Commit 64e62e0 · verified · 1 parent (7fdc309)
SultanR committed: Upload README.md with huggingface_hub

Files changed: README.md (+115 −0)

---
license: cc-by-4.0
language:
- tr
size_categories:
- 10M<n<100M
task_categories:
- text-generation
tags:
- pretraining
- turkish
- deduplication
- quality-filtered
configs:
- config_name: minhash_deduped
  data_files:
  - split: train
    path: "minhash_deduped/**/*.parquet"
- config_name: quality_filtered
  data_files:
  - split: train
    path: "quality_filtered/**/*.parquet"
- config_name: consensus
  data_files:
  - split: train
    path: "consensus/*.parquet"
---

# TurMix: Turkish Pretraining Data Mix

A high-quality Turkish pretraining dataset created by combining, filtering, and deduplicating multiple sources.

## Dataset Description

This dataset contains Turkish text from multiple web crawl sources, processed through a quality filtering and MinHash deduplication pipeline.

### Sources
- **C4** (mC4 Turkish subset)
- **CulturaX** (Turkish)
- **Fineweb-2** (tur_Latn)
- **HPLT-2** (tur_Latn, 5 shards)
- **VNGRS Web Corpus**

## Subsets

### 1. `minhash_deduped` (Recommended)
MinHash-deduplicated data. Each source was deduplicated individually to remove near-duplicate documents.

```python
from datasets import load_dataset
ds = load_dataset("AdaMLLab/TurMix", "minhash_deduped")
```

**Statistics:**
- ~27M documents
- 359GB compressed

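As an illustration of how MinHash near-duplicate detection of this kind works in general (a generic sketch with assumed parameters — not the actual pipeline used to build this dataset), a per-document signature can be computed and compared:

```python
import hashlib

NUM_HASHES = 64  # signature length; an assumed value, not the pipeline's setting


def shingles(text, n=5):
    """Character n-gram shingle set of a whitespace-normalized document."""
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(max(1, len(text) - n + 1))}


def minhash_signature(text, num_hashes=NUM_HASHES):
    """Minimum hash value per seeded hash function over the shingle set."""
    shingle_set = shingles(text)
    return [
        min(
            int.from_bytes(hashlib.md5(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingle_set
        )
        for seed in range(num_hashes)
    ]


def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching slots estimates the Jaccard similarity of the shingle sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Two near-identical documents agree on most signature slots, so documents whose estimated similarity exceeds a chosen threshold can be dropped as near-duplicates without comparing full texts.
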
### 2. `quality_filtered`
Quality-filtered data before deduplication. Use this if you want to apply your own deduplication.

```python
from datasets import load_dataset
ds = load_dataset("AdaMLLab/TurMix", "quality_filtered")
```

**Statistics:**
- ~49M documents
- 658GB compressed

### 3. `consensus`
Documents that appear in 2+ sources (exact text match). These are high-confidence documents verified across multiple crawls.

```python
from datasets import load_dataset
ds = load_dataset("AdaMLLab/TurMix", "consensus")
```

**Statistics:**
- 7.84M documents
- 13GB compressed

**Schema:**
- `text`: Document text
- `id`: Primary document ID
- `sources`: List of sources where the document appears (e.g., `["c4", "culturax"]`)
- `all_ids`: All document IDs from all sources
- `metadata`: Additional metadata

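For illustration, here is what working against the schema above might look like; the record values below are made up, not real rows from the dataset:

```python
# A made-up record following the `consensus` schema documented above
# (the `id` / `all_ids` values are hypothetical, not real dataset IDs).
record = {
    "text": "Örnek bir Türkçe belge.",
    "id": "doc-000123",
    "sources": ["c4", "culturax"],
    "all_ids": ["doc-000123", "doc-998877"],
    "metadata": {},
}


def appears_in_at_least(record, k):
    """True if the document was found in at least k independent sources."""
    return len(record["sources"]) >= k
```

With the `datasets` library, the same predicate could be applied across the subset via `ds.filter(lambda r: appears_in_at_least(r, 2))`.
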
## Quality Filtering

Documents were filtered based on:
- Language identification (Turkish Latin script ratio)
- Document length constraints
- Line quality metrics
- Repetition detection (including Turkish-specific patterns)
- Boilerplate/policy phrase removal

Filter thresholds are based on the Fineweb-2 Turkish configuration.

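A minimal sketch of the kinds of checks listed above, with made-up thresholds (the actual values follow the Fineweb-2 Turkish configuration and are not reproduced here):

```python
TURKISH_LATIN = set("abcçdefgğhıijklmnoöprsştuüvyz")


def latin_script_ratio(text):
    """Share of alphabetic characters drawn from the Turkish Latin alphabet."""
    letters = [ch for ch in text.lower() if ch.isalpha()]
    if not letters:
        return 0.0
    return sum(ch in TURKISH_LATIN for ch in letters) / len(letters)


def passes_filters(text,
                   min_chars=200,         # hypothetical length floor
                   max_chars=100_000,     # hypothetical length ceiling
                   min_script_ratio=0.9,  # hypothetical script threshold
                   max_dup_line_frac=0.3):
    """Return True if the document clears all (illustrative) checks."""
    if not (min_chars <= len(text) <= max_chars):
        return False
    if latin_script_ratio(text) < min_script_ratio:
        return False
    # Crude repetition check: fraction of non-unique lines in the document.
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if lines:
        dup_frac = 1 - len(set(lines)) / len(lines)
        if dup_frac > max_dup_line_frac:
            return False
    return True
```
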
## Citation

If you use this dataset, please cite:

```bibtex
@dataset{turmix2024,
  title={TurMix: Turkish Pretraining Data Mix},
  author={AdaMLLab},
  year={2024},
  publisher={Hugging Face}
}
```

## License

This dataset is released under CC-BY-4.0. Individual source datasets may have their own licenses.