---
license: cc-by-4.0
language:
- tr
size_categories:
- 10M<n<100M
task_categories:
- text-generation
tags:
- pretraining
- turkish
- deduplication
- quality-filtered
configs:
- config_name: minhash_deduped
  data_files:
  - split: train
    path: "minhash_deduped/**/*.parquet"
- config_name: quality_filtered
  data_files:
  - split: train
    path: "quality_filtered/**/*.parquet"
- config_name: consensus
  data_files:
  - split: train
    path: "consensus/*.parquet"
---
|
|
|
|
|
# TurMix: Turkish Pretraining Data Mix |
|
|
|
|
|
A high-quality Turkish pretraining dataset created by combining, filtering, and deduplicating multiple sources.

## Dataset Description

This dataset contains Turkish text from multiple web crawl sources, processed through a quality-filtering and MinHash-deduplication pipeline.

### Sources

- **C4** (mC4 Turkish subset)
- **CulturaX** (Turkish)
- **Fineweb-2** (tur_Latn)
- **HPLT-2** (tur_Latn, 5 shards)
- **VNGRS Web Corpus**

## Subsets
|
|
|
|
|
### 1. `minhash_deduped` (Recommended)

MinHash-deduplicated data. Each source was deduplicated individually to remove near-duplicate documents; a sketch of the technique follows the statistics below.

```python
from datasets import load_dataset

ds = load_dataset("AdaMLLab/TurMix", "minhash_deduped")
```
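At this size, streaming can be preferable to a full local download. The snippet below uses the standard `datasets` streaming API and assumes this subset exposes the same `text` field as the `consensus` schema documented further down:

```python
from datasets import load_dataset

# Stream records lazily instead of downloading the full subset up front
ds = load_dataset("AdaMLLab/TurMix", "minhash_deduped", streaming=True)

for doc in ds["train"].take(5):
    print(doc["text"][:200])
```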
|
|
|
|
|
**Statistics:**

- ~27M documents
- 359GB compressed
|
|
|
|
|
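The deduplication code itself is not part of this card. As a minimal sketch of the technique, assuming the `datasketch` library and illustrative parameters (word 3-gram shingles, 128 permutations, Jaccard threshold 0.8; not the pipeline's actual settings):

```python
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128  # illustrative; not necessarily the pipeline's setting

def signature(text: str) -> MinHash:
    """Build a MinHash signature over word 3-gram shingles."""
    words = text.lower().split()
    m = MinHash(num_perm=NUM_PERM)
    for i in range(max(1, len(words) - 2)):
        m.update(" ".join(words[i : i + 3]).encode("utf-8"))
    return m

# Hypothetical mini-corpus: d2 is a near-duplicate of d1.
docs = {
    "d1": "bu bir deneme metnidir ve yalnizca ornek olmasi icin yazilmistir",
    "d2": "bu bir deneme metnidir ve yalnizca ornek olmasi icin yazilmistir tamam",
    "d3": "tamamen farkli icerige sahip baska bir belge",
}

lsh = MinHashLSH(threshold=0.8, num_perm=NUM_PERM)
kept = []
for doc_id, text in docs.items():
    sig = signature(text)
    if not lsh.query(sig):  # no near-duplicate indexed yet -> keep
        lsh.insert(doc_id, sig)
        kept.append(doc_id)

print(kept)  # d2 should be dropped as a near-duplicate of d1
```

Because each source was deduplicated individually, near-duplicates may still exist *across* sources within this subset.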
### 2. `quality_filtered`

Quality-filtered data before deduplication. Use this subset if you want to apply your own deduplication; a minimal example follows the statistics below.

```python
from datasets import load_dataset

ds = load_dataset("AdaMLLab/TurMix", "quality_filtered")
```
|
|
|
|
|
**Statistics:**

- ~49M documents
- 658GB compressed
|
|
|
|
|
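As a starting point for your own pipeline, a minimal exact-match pass might look like the hypothetical sketch below. It keeps the first occurrence of each distinct text; note that the in-memory `seen` set is only practical on samples, not on all ~49M documents:

```python
import hashlib

from datasets import load_dataset

ds = load_dataset("AdaMLLab/TurMix", "quality_filtered", streaming=True)

seen = set()

def first_occurrence(example):
    # Keep a document only the first time its exact text appears.
    digest = hashlib.sha256(example["text"].encode("utf-8")).digest()
    if digest in seen:
        return False
    seen.add(digest)
    return True

# Lazy: nothing is downloaded or filtered until you iterate.
deduped = ds["train"].filter(first_occurrence)
```

For near-duplicates rather than exact matches, a MinHash pass as sketched under `minhash_deduped` is the natural next step.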
### 3. `consensus`

Documents that appear in two or more sources (exact text match). These are high-confidence documents verified across multiple crawls.

```python
from datasets import load_dataset

ds = load_dataset("AdaMLLab/TurMix", "consensus")
```
|
|
|
|
|
**Statistics:**

- 7.84M documents
- 13GB compressed

**Schema:**

- `text`: Document text
- `id`: Primary document ID
- `sources`: List of sources where the document appears (e.g., `["c4", "culturax"]`); see the filtering example below
- `all_ids`: All document IDs from all sources
- `metadata`: Additional metadata
|
|
|
|
|
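For example, keeping only documents corroborated by at least three crawls (a stricter, hypothetical cut; membership in this subset already guarantees two):

```python
from datasets import load_dataset

ds = load_dataset("AdaMLLab/TurMix", "consensus")

# Keep documents that appear in at least three of the five source crawls
strict = ds["train"].filter(lambda ex: len(ex["sources"]) >= 3)
print(f"{len(strict):,} of {len(ds['train']):,} documents kept")
```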
## Quality Filtering

Documents were filtered based on:

- Language identification (Turkish Latin-script ratio)
- Document length constraints
- Line-level quality metrics
- Repetition detection (including Turkish-specific patterns)
- Boilerplate/policy-phrase removal

Filter thresholds are based on the Fineweb-2 Turkish configuration.
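As a rough illustration, a simplified version of a few of these heuristics could look like the sketch below; every threshold is a hypothetical placeholder, not the pipeline's actual Fineweb-2-derived value:

```python
import re

TURKISH_LATIN = re.compile(r"[a-zçğıöşü]", re.IGNORECASE)

def passes_quality(text: str) -> bool:
    """Toy quality filter: script ratio, length, and repetition checks."""
    # Script check: ratio of Turkish Latin letters among non-space characters.
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return False
    if sum(1 for c in chars if TURKISH_LATIN.match(c)) / len(chars) < 0.80:
        return False

    # Document length constraints (characters; placeholder bounds).
    if not (200 <= len(text) <= 1_000_000):
        return False

    # Repetition: fraction of duplicate non-empty lines.
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    if lines and 1 - len(set(lines)) / len(lines) > 0.30:
        return False

    return True

print(passes_quality("Bu kısa bir örnek."))  # False: fails the length check
```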