---
language:
- th
license: other
task_categories:
- text-generation
arxiv: 2512.18834
configs:
- config_name: minhash_deduped
  data_files:
  - split: train
    path: minhash_deduped/*.parquet
- config_name: matched
  data_files:
  - split: train
    path: consensus/*.parquet
default: minhash_deduped
---

<p align="center">
  <a href="https://huggingface.co/collections/AdaMLLab/mixminmatch">
    <img src="https://img.shields.io/badge/🤗_Collection-MixMinMatch-blue" alt="MixMinMatch Collection">
  </a>
</p>

ThaiMix ([arXiv:2512.18834](https://arxiv.org/abs/2512.18834)) is a Thai pretraining corpus of 70.5 billion tokens across 81.3 million documents in its `minhash_deduped` subset. Rather than scraping the web anew, ThaiMix combines five publicly available Thai datasets, applies Thai-specific quality filtering, and deduplicates across datasets.

## Subsets

| Subset | Documents | Tokens | Description |
|--------|-----------|--------|-------------|
| `minhash_deduped` | 81.3M | 70.5B | Document-level MinHash deduplication |
| `matched` | 10.9M | 12.3B | Documents appearing in 2+ source datasets |

The `matched` subset treats cross-dataset agreement as a quality signal: a document that independently survived two or more curation pipelines is less likely to be pipeline-specific noise.
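As a toy illustration of this matching rule (the field names and the exact-text match key are assumptions for the sketch; the actual pipeline matches near-duplicates via MinHash), keeping documents seen in two or more sources looks like:

```python
from collections import defaultdict

# Hypothetical records; "text" and "source" are illustrative field names.
docs = [
    {"text": "doc A", "source": "C4"},
    {"text": "doc A", "source": "CulturaX"},
    {"text": "doc B", "source": "C4"},
]

# Collect the set of source datasets each document appears in.
sources_by_text = defaultdict(set)
for d in docs:
    sources_by_text[d["text"]].add(d["source"])

# Keep only documents found in 2+ independent sources.
matched = [t for t, srcs in sources_by_text.items() if len(srcs) >= 2]
# → ["doc A"]
```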

## Usage

```python
from datasets import load_dataset

# Deduplicated subset (the default config)
ds = load_dataset("AdaMLLab/ThaiMix", "minhash_deduped")

# Cross-source matched subset
matched = load_dataset("AdaMLLab/ThaiMix", "matched")
```

## Sources

Tokens were counted using `meta-llama/Llama-3.2-3B`'s tokenizer.

| Source | Survival Rate |
|--------|---------------|
| HPLT 2.0 | 75.1% |
| C4 | 72.6% |
| SEA-CC | 70.7% |
| CulturaX | 69.0% |
| FineWeb-2 | 61.7% |

## Pipeline

1. Quality filtering with Thai-specific thresholds (Thai script ratio, repetition patterns, language identification)
2. Document-level MinHash deduplication (5-gram shingles, 14 bands, 8 hashes per band, similarity threshold 0.8)
3. Cross-source matching to identify documents appearing in 2+ independent sources
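Step 2's banding scheme can be sketched in plain Python with the stated parameters (14 bands × 8 hashes = 112 hash functions). This is a simplified illustration, not the production pipeline: the character-level shingling and BLAKE2b hash family here are assumptions.

```python
import hashlib

NUM_BANDS = 14
HASHES_PER_BAND = 8
NUM_HASHES = NUM_BANDS * HASHES_PER_BAND  # 112 hash functions total

def shingles(text, n=5):
    """Overlapping character n-grams of a document (n=5 per the pipeline)."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def minhash_signature(text):
    """Minimum over the shingle set for each of 112 seeded hash functions."""
    sig = []
    for seed in range(NUM_HASHES):
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(s.encode(), digest_size=8,
                                salt=seed.to_bytes(8, "big")).digest(),
                "big")
            for s in shingles(text)
        ))
    return sig

def lsh_buckets(sig):
    """Band the signature; documents sharing any band bucket are candidates."""
    return [
        hash(tuple(sig[b * HASHES_PER_BAND:(b + 1) * HASHES_PER_BAND]))
        for b in range(NUM_BANDS)
    ]

a = minhash_signature("สวัสดีครับ ยินดีต้อนรับสู่ชุดข้อมูลภาษาไทย")
b = minhash_signature("สวัสดีครับ ยินดีต้อนรับสู่ชุดข้อมูลภาษาไทย!")
# Near-duplicates are very likely to collide in at least one band bucket.
overlap = set(lsh_buckets(a)) & set(lsh_buckets(b))
```

With 14 bands of 8 rows, the band-collision probability crosses 1/2 near Jaccard similarity (1/14)^(1/8) ≈ 0.72, which is consistent with the 0.8 similarity threshold used for deduplication.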

## Citation

```bibtex
@misc{alrashed2025mixminmatch,
      title={Mix, MinHash, and Match: Cross-Source Agreement for Multilingual Pretraining Datasets}, 
      author={Sultan Alrashed and Francesco Orabona},
      year={2025},
      eprint={2512.18834v2},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2512.18834v2}, 
}
```

## License

See individual source dataset licenses.