---
license: cc-by-4.0
language:
- tr
size_categories:
- 10M<n<100M
task_categories:
- text-generation
tags:
- pretraining
- turkish
- deduplication
- quality-filtered
configs:
- config_name: minhash_deduped
  data_files:
    - split: train
      path: "minhash_deduped/**/*.parquet"
- config_name: quality_filtered
  data_files:
    - split: train
      path: "quality_filtered/**/*.parquet"
- config_name: consensus
  data_files:
    - split: train
      path: "consensus/*.parquet"
---

# TurMix: Turkish Pretraining Data Mix

A high-quality Turkish pretraining dataset created by combining, filtering, and deduplicating multiple sources.

## Dataset Description

This dataset contains Turkish text from multiple web crawl sources, processed through a quality filtering and MinHash deduplication pipeline.

### Sources
- **C4** (mC4 Turkish subset)
- **CulturaX** (Turkish)
- **FineWeb-2** (tur_Latn)
- **HPLT-2** (tur_Latn, 5 shards)
- **VNGRS Web Corpus**

## Subsets

### 1. `minhash_deduped` (Recommended)
MinHash-deduplicated data. Each source was deduplicated individually to remove near-duplicate documents, so near-duplicates shared across different sources may remain.

```python
from datasets import load_dataset
ds = load_dataset("AdaMLLab/TurMix", "minhash_deduped")
```

**Statistics:**
- ~27M documents
- 359GB compressed
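
For illustration, a minimal sketch of per-source near-duplicate removal using the `datasketch` library. The signature size and Jaccard threshold below are placeholder values, not the pipeline's actual settings:

```python
from datasketch import MinHash, MinHashLSH

def doc_minhash(text: str, num_perm: int = 128) -> MinHash:
    # Fold each whitespace token into the document's MinHash signature.
    m = MinHash(num_perm=num_perm)
    for token in text.lower().split():
        m.update(token.encode("utf-8"))
    return m

docs = [
    {"id": "a", "text": "istanbul boğazında akşam saatlerinde uzun bir yürüyüş yaptık bugün"},
    {"id": "b", "text": "istanbul boğazında akşam saatlerinde uzun bir yürüyüş yaptık bugün yine"},
    {"id": "c", "text": "ankara'da hava bugün kapalı ve yağmurlu"},
]

# Placeholder LSH settings: ~0.8 Jaccard similarity over 128 permutations.
lsh = MinHashLSH(threshold=0.8, num_perm=128)
kept = []
for doc in docs:
    sig = doc_minhash(doc["text"])
    if not lsh.query(sig):        # no near-duplicate indexed so far
        lsh.insert(doc["id"], sig)
        kept.append(doc)

print([d["id"] for d in kept])    # "b" is likely dropped as a near-duplicate of "a"
```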

### 2. `quality_filtered`
Quality-filtered data before deduplication. Use this if you want to apply your own deduplication.

```python
from datasets import load_dataset
ds = load_dataset("AdaMLLab/TurMix", "quality_filtered")
```

**Statistics:**
- ~49M documents
- 658GB compressed
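
At this size, streaming avoids a full local download (assuming the same `text` field as the consensus schema below):

```python
from datasets import load_dataset

# Iterate over records without materializing ~658GB on disk.
ds = load_dataset("AdaMLLab/TurMix", "quality_filtered", streaming=True)
for doc in ds["train"].take(3):
    print(doc["text"][:80])
```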

### 3. `consensus`
Documents that appear in 2+ sources (exact text match). These are high-confidence documents verified across multiple crawls.

```python
from datasets import load_dataset
ds = load_dataset("AdaMLLab/TurMix", "consensus")
```

**Statistics:**
- 7.84M documents
- 13GB compressed

**Schema:**
- `text`: Document text
- `id`: Primary document ID
- `sources`: List of sources where document appears (e.g., `["c4", "culturax"]`)
- `all_ids`: All document IDs from all sources
- `metadata`: Additional metadata
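
For reference, the cross-source intersection could be computed roughly as follows. This is a sketch, not the actual pipeline code, and `records` is a hypothetical stand-in for iterating over all source corpora:

```python
import hashlib
from collections import defaultdict

# Hypothetical (source, doc_id, text) triples drawn from all corpora.
records = [
    ("c4",       "c4-001", "aynı metin iki kaynakta"),
    ("culturax", "cx-042", "aynı metin iki kaynakta"),
    ("hplt2",    "hp-007", "yalnızca tek kaynakta geçen metin"),
]

buckets = defaultdict(lambda: {"text": None, "sources": set(), "all_ids": []})
for source, doc_id, text in records:
    key = hashlib.sha256(text.encode("utf-8")).hexdigest()  # exact-text match key
    bucket = buckets[key]
    bucket["text"] = text
    bucket["sources"].add(source)
    bucket["all_ids"].append(doc_id)

# Keep only documents seen in two or more sources.
consensus = [b for b in buckets.values() if len(b["sources"]) >= 2]
print(consensus[0]["sources"])  # {'c4', 'culturax'} (set order may vary)
```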

## Quality Filtering

Documents were filtered based on:
- Language identification (Turkish Latin script ratio)
- Document length constraints
- Line quality metrics
- Repetition detection (including Turkish-specific patterns)
- Boilerplate/policy phrase removal

Filter thresholds follow the FineWeb-2 Turkish configuration.
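
As a rough illustration of the first two checks, a sketch with placeholder thresholds (the ratio and length cutoffs below are assumptions, not the published FineWeb-2 values):

```python
TURKISH_ALPHABET = set(
    "abcçdefgğhıijklmnoöprsştuüvyz"
    "ABCÇDEFGĞHIİJKLMNOÖPRSŞTUÜVYZ"
)

def turkish_latin_ratio(text: str) -> float:
    # Share of alphabetic characters drawn from the Turkish Latin alphabet.
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return 0.0
    return sum(c in TURKISH_ALPHABET for c in letters) / len(letters)

def passes_filters(text: str,
                   min_ratio: float = 0.90,   # placeholder threshold
                   min_chars: int = 200,      # placeholder threshold
                   max_chars: int = 100_000) -> bool:
    return (turkish_latin_ratio(text) >= min_ratio
            and min_chars <= len(text) <= max_chars)

print(passes_filters("Bu, yeterince uzun bir Türkçe belge örneği. " * 10))  # True
```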