Datasets · Modalities: Text · Formats: parquet · Languages: Thai
SultanR committed (verified) · Commit 05d8ccd · Parent(s): 6436521

Update README.md

Files changed (1): README.md (+54 −60)

README.md CHANGED
@@ -1,86 +1,80 @@
  ---
  language:
- - th
- license: apache-2.0
  task_categories:
- - text-generation
- - fill-mask
- tags:
- - thai
- - pretraining
- - deduplicated
- - minhash
- - consensus
- pretty_name: ThaiMix
- size_categories:
- - 10M<n<100M
  configs:
- - config_name: consensus
-   data_files:
-   - split: train
-     path: consensus/*.parquet
- - config_name: minhash_deduped
-   data_files:
-   - split: train
-     path: minhash_deduped/*.parquet
-   default: true
  ---
 
- # ThaiMix
 
- A high-quality Thai language pretraining dataset created by combining and deduplicating multiple sources.
 
- ## Dataset Description
 
- This dataset contains Thai text collected from multiple web sources, deduplicated using MinHash (80% Jaccard similarity threshold), and filtered for quality.
 
- ### Subsets
-
- | Subset | Description | Documents | Schema |
- |--------|-------------|-----------|--------|
- | `consensus` | Documents appearing in 2+ sources (high quality) | ~10.9M | text, id, source (list) |
- | `minhash_deduped` | Full deduplicated dataset | ~68M | text, id, source |
-
- ### Sources
-
- The dataset combines text from:
- - **C4** - Colossal Clean Crawled Corpus
- - **CulturaX** - Multilingual web corpus
- - **FineWeb2** - High-quality web text
- - **HPLT2** - High Performance Language Technologies
- - **SEA-CC** - Southeast Asian Common Crawl
 
  ## Usage
 
  ```python
  from datasets import load_dataset
 
- # Load consensus subset (recommended for high-quality pretraining)
- ds = load_dataset("AdaMLLab/ThaiMix", "consensus")
-
- # Load full deduplicated dataset
  ds = load_dataset("AdaMLLab/ThaiMix", "minhash_deduped")
  ```
 
- ## Schema
 
- ### Consensus subset
- - `text` (string): The document text
- - `id` (string): Unique document identifier (MD5 hash)
- - `source` (list[string]): List of sources where this document appears
 
- ### Minhash_deduped subset
- - `text` (string): The document text
- - `id` (string): Unique document identifier
- - `source` (string): Original source of the document
 
- ## Processing Pipeline
 
- 1. **Download**: Raw data from each source
- 2. **Filtering**: Language detection, quality filters (character ratios, stop words, etc.)
- 3. **MinHash Deduplication**: 80% Jaccard similarity threshold with 128 hash functions
- 4. **Consensus Building**: Identify documents appearing in 2+ sources using MinHash clusters
 
  ## License
 
- Apache 2.0
 
  ---
  language:
+ - th
+ license: other
  task_categories:
+ - text-generation
+ arxiv: 2512.18834
  configs:
+ - config_name: minhash_deduped
+   data_files:
+   - split: train
+     path: minhash_deduped/*.parquet
+ - config_name: matched
+   data_files:
+   - split: train
+     path: consensus/*.parquet
+ default: minhash_deduped
  ---
 
+ <p align="center">
+   <a href="https://huggingface.co/collections/AdaMLLab/mixminmatch">
+     <img src="https://img.shields.io/badge/🤗_Collection-MixMinMatch-blue" alt="MixMinMatch Collection">
+   </a>
+ </p>
 
+ ThaiMix ([https://arxiv.org/abs/2512.18834](https://arxiv.org/abs/2512.18834)) is a Thai pretraining corpus containing 70 billion tokens across 81 million documents (in the `minhash_deduped` subset). Rather than scraping the web again, ThaiMix combines five publicly available Thai datasets, applies Thai-specific quality filtering, and performs cross-dataset deduplication.
 
+ ## Subsets
 
+ | Subset | Documents | Tokens | Description |
+ |--------|-----------|--------|-------------|
+ | `minhash_deduped` | 81.3M | 70.5B | Document-level MinHash deduplication |
+ | `matched` | 10.9M | 12.3B | Documents appearing in 2+ source datasets |
 
+ The `matched` subset uses cross-dataset agreement as a signal for quality.
 
  ## Usage
 
  ```python
  from datasets import load_dataset
 
  ds = load_dataset("AdaMLLab/ThaiMix", "minhash_deduped")
+ ds = load_dataset("AdaMLLab/ThaiMix", "matched")
  ```
 
+ ## Sources
 
+ Tokens were counted with the `meta-llama/Llama-3.2-3B` tokenizer.
 
+ | Source | Survival Rate |
+ |--------|---------------|
+ | HPLT 2.0 | 75.1% |
+ | C4 | 72.6% |
+ | SEA-CC | 70.7% |
+ | CulturaX | 69.0% |
+ | FineWeb-2 | 61.7% |
 
+ ## Pipeline
 
+ 1. Quality filtering with Thai-specific thresholds (Thai script ratio, repetition patterns, language identification)
+ 2. Document-level MinHash deduplication (5-gram shingles, 14 bands, 8 hashes per band, similarity threshold 0.8)
+ 3. Cross-source matching to identify documents appearing in 2+ independent sources
+
+ ## Citation
+
+ ```bibtex
+ @misc{alrashed2025mixminmatch,
+   title={Mix, MinHash, and Match: Cross-Source Agreement for Multilingual Pretraining Datasets},
+   author={Sultan Alrashed and Francesco Orabona},
+   year={2025},
+   eprint={2512.18834v2},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL},
+   url={https://arxiv.org/abs/2512.18834v2},
+ }
+ ```
 
  ## License
 
+ See individual source dataset licenses.
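
The deduplication parameters the updated card describes (5-gram shingles, 14 bands of 8 hashes each, similarity threshold 0.8) can be sketched in plain Python. This is an illustrative reimplementation, not the authors' pipeline: the hash construction, the use of character-level shingles, and all function names here are assumptions.

```python
import hashlib
import random
from itertools import combinations

# Parameters from the dataset card's pipeline description.
NUM_BANDS = 14
ROWS_PER_BAND = 8
NUM_PERM = NUM_BANDS * ROWS_PER_BAND  # 112 permutations total
THRESHOLD = 0.8
PRIME = (1 << 61) - 1  # Mersenne prime for the universal hash family

rng = random.Random(0)
PERMS = [(rng.randrange(1, PRIME), rng.randrange(PRIME)) for _ in range(NUM_PERM)]

def shingles(text, n=5):
    """Character 5-gram shingles (word-level shingles are another common choice)."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def minhash(text):
    """112-value MinHash signature: minimum of each hash permutation over shingles."""
    hashes = [int.from_bytes(hashlib.sha1(s.encode()).digest()[:8], "big")
              for s in shingles(text)]
    return [min((a * h + b) % PRIME for h in hashes) for a, b in PERMS]

def candidate_pairs(docs):
    """LSH banding: documents identical on all 8 hashes of any band become
    candidates; candidates are kept if estimated Jaccard similarity >= 0.8."""
    sigs = {doc_id: minhash(text) for doc_id, text in docs.items()}
    buckets = {}
    for doc_id, sig in sigs.items():
        for band in range(NUM_BANDS):
            key = (band, tuple(sig[band * ROWS_PER_BAND:(band + 1) * ROWS_PER_BAND]))
            buckets.setdefault(key, []).append(doc_id)
    pairs = set()
    for ids in buckets.values():
        pairs.update(combinations(sorted(ids), 2))
    def est_jaccard(a, b):
        return sum(x == y for x, y in zip(sigs[a], sigs[b])) / NUM_PERM
    return {p for p in pairs if est_jaccard(*p) >= THRESHOLD}
```

With b = 14 bands and r = 8 rows, the probability that a pair with Jaccard similarity s shares at least one band is 1 − (1 − s⁸)¹⁴, which crosses 50% around s ≈ 0.7, so nearly all pairs at the stated 0.8 threshold surface as candidates.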