# TurMix & HinMix: Exact Reproduction Guide

This document provides a complete guide to reproducing the Turkish (TurMix) and Hindi (HinMix) pretraining data pipelines.

## Table of Contents
1. [Overview](#overview)
2. [Environment Setup](#environment-setup)
3. [Project Structure](#project-structure)
4. [Data Sources](#data-sources)
5. [Pipeline Stages](#pipeline-stages)
6. [Stage 1: Download](#stage-1-download)
7. [Stage 2: Quality Filtering](#stage-2-quality-filtering)
8. [Stage 3: MinHash Deduplication](#stage-3-minhash-deduplication)
9. [Stage 4: Consensus Subset Construction](#stage-4-consensus-subset-construction)
10. [Stage 5: Upload to HuggingFace](#stage-5-upload-to-huggingface)
11. [Final Statistics](#final-statistics)
12. [Key Scripts](#key-scripts)
13. [Troubleshooting](#troubleshooting)
14. [License](#license)
15. [Citation](#citation)

---

## Overview

The pipeline processes web crawl data through the following stages:
1. **Download**: Fetch data from HuggingFace using `huggingface-cli`
2. **Quality Filter**: Apply language-specific quality filters using DataTrove
3. **MinHash Dedup**: Remove near-duplicate documents within each source
4. **Consensus**: Identify documents appearing in 2+ sources (exact text match)

### Key Design Decisions
- Use `huggingface-cli download` with `--include` patterns for efficient selective downloads
- Process each source separately to avoid parquet schema conflicts
- Use 54 workers (CPU cores - 2) for parallel processing
- MinHash deduplication within each source (not cross-source)
- Consensus detection via exact text hash matching across sources

---

## Environment Setup

### Prerequisites
```bash
# Python 3.10+
conda create -n pretraining python=3.10
conda activate pretraining

# Install dependencies
pip install datasets datatrove pyarrow huggingface_hub
pip install fasttext-langdetect  # For language detection
```

### HuggingFace Authentication
```bash
huggingface-cli login
# Enter your HuggingFace token
```

---

## Project Structure

```
arabic-pretraining-mix-other-languages/
├── run_pipeline.py               # Main unified pipeline runner
├── build_consensus_v2.py         # Consensus subset builder (memory-efficient)
├── filter_c4.py                  # C4 JSON filtering script
├── fix_hplt2_filter.py           # Turkish HPLT2 filtering (5 subfolders)
├── fix_hindi_hplt2_filter.py     # Hindi HPLT2 filtering
├── src/
│   ├── config/
│   │   ├── common.py             # Shared configs (paths, workers, MinHash params)
│   │   ├── datasets_tr.py        # Turkish dataset definitions
│   │   └── datasets_hi.py        # Hindi dataset definitions
│   ├── filters/
│   │   ├── base_quality.py       # Base quality filter class
│   │   ├── tr_quality.py         # Turkish quality filter
│   │   ├── hi_quality.py         # Hindi quality filter
│   │   └── lang_config.py        # Language-specific constants
│   └── dedup/
│       └── __init__.py           # MinHash deduplication wrappers
└── data/
    ├── hi/                       # Hindi data
    │   ├── downloads/            # Raw downloaded data
    │   ├── filtered/             # Quality-filtered data
    │   ├── deduped/              # MinHash-deduplicated data
    │   ├── consensus/            # Consensus subset
    │   └── minhash_signatures/   # MinHash signatures (preserved)
    └── tr/                       # Turkish data (same structure)
```

---

## Data Sources

### Hindi Sources (6)
| Source | HuggingFace Path | Subset/Config | Download Command |
|--------|------------------|---------------|------------------|
| HPLT-2 | `HPLT/HPLT2.0_cleaned` | `hin_Deva` | `huggingface-cli download HPLT/HPLT2.0_cleaned --include "hin_Deva/*" --local-dir ./data/hi/hplt2 --repo-type dataset` |
| Fineweb-2 | `HuggingFaceFW/fineweb-2` | `hin_Deva` | `huggingface-cli download HuggingFaceFW/fineweb-2 --include "data/hin_Deva/*" --local-dir ./data/hi/fineweb2 --repo-type dataset` |
| CulturaX | `uonlp/CulturaX` | `hi` | `huggingface-cli download uonlp/CulturaX --include "hi/*" --local-dir ./data/hi/culturax --repo-type dataset` |
| mC4 | `allenai/c4` | `hi` | `huggingface-cli download allenai/c4 --include "multilingual/c4-hi*" --local-dir ./data/hi/c4 --repo-type dataset` |
| Sangraha (verified) | `ai4bharat/sangraha` | `verified/hin` | `huggingface-cli download ai4bharat/sangraha --include "verified/hin/*" --local-dir ./data/hi/sangraha_verified --repo-type dataset` |
| Sangraha (unverified) | `ai4bharat/sangraha` | `unverified/hin` | `huggingface-cli download ai4bharat/sangraha --include "unverified/hin/*" --local-dir ./data/hi/sangraha_unverified --repo-type dataset` |

### Turkish Sources (5)
| Source | HuggingFace Path | Subset/Config | Download Command |
|--------|------------------|---------------|------------------|
| HPLT-2 | `HPLT/HPLT2.0_cleaned` | `tur_Latn` | `huggingface-cli download HPLT/HPLT2.0_cleaned --include "tur_Latn*/*" --local-dir ./data/tr/hplt2 --repo-type dataset` |
| Fineweb-2 | `HuggingFaceFW/fineweb-2` | `tur_Latn` | `huggingface-cli download HuggingFaceFW/fineweb-2 --include "data/tur_Latn/*" --local-dir ./data/tr/fineweb2 --repo-type dataset` |
| CulturaX | `uonlp/CulturaX` | `tr` | `huggingface-cli download uonlp/CulturaX --include "tr/*" --local-dir ./data/tr/culturax --repo-type dataset` |
| mC4 | `allenai/c4` | `tr` | `huggingface-cli download allenai/c4 --include "multilingual/c4-tr*" --local-dir ./data/tr/c4 --repo-type dataset` |
| VNGRS | `vngrs-ai/vngrs-web-corpus` | N/A | `huggingface-cli download vngrs-ai/vngrs-web-corpus --local-dir ./data/tr/vngrs --repo-type dataset` |

**Note**: Turkish HPLT-2 is split into 5 subfolders: `tur_Latn_1` through `tur_Latn_5`.

---

## Pipeline Stages

### Stage 1: Download

Downloads are performed using `huggingface-cli download` with `--include` patterns:

```bash
# Example: Download Hindi CulturaX
huggingface-cli download uonlp/CulturaX \
    --include "hi/*" \
    --local-dir ./data/hi/culturax \
    --repo-type dataset
```

The `run_pipeline.py` script automates this:
```bash
python run_pipeline.py --language hi --stage download
python run_pipeline.py --language tr --stage download
```

---

### Stage 2: Quality Filtering

Quality filtering uses DataTrove with custom language-specific filters.

#### Filter Configuration (from Fineweb-2)

**Hindi Filter Thresholds** (`src/filters/hi_quality.py`):
```python
HINDI_FILTER_CONFIG = {
    "min_script_ratio": 0.5,        # Devanagari script ratio
    "lang_score_threshold": 0.692,
    "dup_line_frac": 0.206,
    "new_line_ratio": 0.316,
    "min_avg_word_length": 2,
    "max_avg_word_length": 21,
    "line_punct_thr": 0.091,
    "non_alpha_words_ratio": 0.837,
    "top_5_gram_frac": 0.135,
    "top_10_gram_frac": 0.090,
}
```

**Turkish Filter Thresholds** (`src/filters/tr_quality.py`):
```python
TURKISH_FILTER_CONFIG = {
    "min_script_ratio": 0.65,       # Turkish Latin script ratio
    "lang_score_threshold": 0.875,
    "dup_line_frac": 0.272,
    "new_line_ratio": 0.222,
    "min_avg_word_length": 3,
    "max_avg_word_length": 21,
    "line_punct_thr": 0.091,
    "non_alpha_words_ratio": 0.773,
    "top_5_gram_frac": 0.154,
    "top_10_gram_frac": 0.103,
}
```

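To make the thresholds concrete, here is a minimal sketch of how two of these metrics could be computed. The authoritative definitions are DataTrove's filter implementations; the semantics assumed here (share of repeated non-empty lines, mean word length) are illustrative approximations only:

```python
# Illustrative only: approximate semantics for two thresholds above.
# The exact metric definitions live in DataTrove's quality filters.
from collections import Counter

def dup_line_frac(text: str) -> float:
    """Fraction of non-empty lines that repeat an earlier line (assumed semantics)."""
    lines = [line for line in text.splitlines() if line.strip()]
    if not lines:
        return 0.0
    counts = Counter(lines)
    duplicated = sum(c - 1 for c in counts.values())
    return duplicated / len(lines)

def avg_word_length(text: str) -> float:
    """Mean length of whitespace-separated tokens."""
    words = text.split()
    return sum(len(w) for w in words) / len(words) if words else 0.0

doc = "foo bar\nfoo bar\nbaz qux\n"
print(dup_line_frac(doc))  # 1 repeated line out of 3
```

Under this reading, a Hindi document would be dropped once `dup_line_frac` exceeds 0.206, or when its average word length falls outside the [2, 21] band.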
#### Output Schema Normalization

All filtered output uses a unified schema:
```python
import pyarrow as pa

OUTPUT_SCHEMA = pa.schema([
    ("text", pa.string()),
    ("id", pa.string()),
    ("metadata", pa.struct([
        ("source", pa.string()),
    ])),
])
```

#### Running Filtering

```bash
# Main parquet datasets (culturax, fineweb2, sangraha, vngrs)
python run_pipeline.py --language hi --stage filter
python run_pipeline.py --language tr --stage filter

# C4 JSON files (requires separate script due to JSON format)
python filter_c4.py --language hi
python filter_c4.py --language tr

# HPLT2 (requires separate handling due to nested structure)
python fix_hindi_hplt2_filter.py
python fix_hplt2_filter.py  # Turkish - 5 subfolders
```

---

### Stage 3: MinHash Deduplication

MinHash deduplication removes near-duplicate documents within each source.

#### MinHash Configuration
```python
MINHASH_CONFIG = {
    "n_grams": 5,
    "num_buckets": 14,
    "hashes_per_bucket": 8,
    "similarity_threshold": 0.8,
}
```
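With `num_buckets` b = 14 and `hashes_per_bucket` r = 8, the standard LSH analysis gives the probability that a pair with Jaccard similarity s collides in at least one bucket as 1 - (1 - s^r)^b. A quick check of that curve:

```python
# Probability that two documents with Jaccard similarity s share at least
# one of b LSH buckets, each built from r minhashes: p = 1 - (1 - s^r)^b.
def collision_prob(s: float, b: int = 14, r: int = 8) -> float:
    return 1.0 - (1.0 - s ** r) ** b

for s in (0.6, 0.7, 0.8, 0.9):
    print(f"s={s}: p={collision_prob(s):.3f}")
```

At the 0.8 similarity threshold this evaluates to roughly 0.92, so most pairs at or above the threshold become dedup candidates, while pairs at s = 0.6 collide only about 21% of the time.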

#### MinHash Stages (via DataTrove)
1. **Stage 1 - Signatures**: Generate MinHash signatures for each document
2. **Stage 2 - Buckets**: Group documents by LSH buckets to find candidates
3. **Stage 3 - Cluster**: Cluster similar documents together
4. **Stage 4 - Filter**: Keep one representative per cluster, write deduped output

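The bucket, cluster, and filter steps above amount to finding connected components over candidate pairs. A toy union-find sketch of that logic (illustrative only; DataTrove's actual implementation is disk-based and parallel):

```python
# Toy sketch of stages 2-4: documents whose signatures collide in any LSH
# bucket are unioned into clusters; one representative per cluster is kept.
# Illustration only, not DataTrove's implementation.
def find(parent: list[int], x: int) -> int:
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def dedup(bucket_collisions: list[tuple[int, int]], n_docs: int) -> list[int]:
    parent = list(range(n_docs))
    for a, b in bucket_collisions:            # stage 2 output: candidate pairs
        parent[find(parent, a)] = find(parent, b)  # stage 3: cluster
    keep = {find(parent, i) for i in range(n_docs)}  # stage 4: one rep per cluster
    return sorted(keep)

# docs 0, 1, 2 collide transitively; docs 3 and 4 are unique
print(dedup([(0, 1), (1, 2)], 5))  # → [2, 3, 4]
```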
#### Running MinHash
```bash
python run_pipeline.py --language hi --stage minhash
python run_pipeline.py --language tr --stage minhash
```

**Runtime**: Hindi ~17 hours, Turkish ~30 hours (on a 56-core machine)

---

### Stage 4: Consensus Subset Construction

The consensus subset identifies documents that appear in 2+ sources using exact text hash matching.

#### Algorithm (Two-Pass, Memory-Efficient)

**Pass 1**: Build hash-to-sources index
```python
import hashlib

# For each document, compute the MD5 hash of its normalized text.
# Store only: hash -> set of sources (not the full text)
def compute_text_hash(text: str) -> str:
    normalized = ' '.join(text.lower().split())
    return hashlib.md5(normalized.encode('utf-8')).hexdigest()

# Pass 1: hash_to_sources[hash].add(source)
```

**Pass 2**: Extract documents with multi-source hashes
```python
# Re-read the data, collect documents whose hash appears in 2+ sources
# Store the full document together with its sources list
```

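Putting the two passes together, a minimal in-memory sketch of the algorithm (the real `build_consensus_v2.py` streams parquet shards from disk and never holds full texts during Pass 1):

```python
# Minimal in-memory sketch of the two-pass consensus algorithm.
# Real input is parquet shards on disk; here, plain dicts of texts.
import hashlib
from collections import defaultdict

def compute_text_hash(text: str) -> str:
    normalized = ' '.join(text.lower().split())
    return hashlib.md5(normalized.encode('utf-8')).hexdigest()

def build_consensus(sources: dict[str, list[str]]) -> list[dict]:
    # Pass 1: hash -> set of source names (no text stored)
    hash_to_sources: dict[str, set[str]] = defaultdict(set)
    for name, docs in sources.items():
        for text in docs:
            hash_to_sources[compute_text_hash(text)].add(name)
    multi = {h for h, srcs in hash_to_sources.items() if len(srcs) >= 2}
    # Pass 2: re-scan, keep the first copy of each multi-source document
    out, seen = [], set()
    for name, docs in sources.items():
        for text in docs:
            h = compute_text_hash(text)
            if h in multi and h not in seen:
                seen.add(h)
                out.append({"text": text, "sources": sorted(hash_to_sources[h])})
    return out

docs = {
    "c4":       ["Merhaba dünya", "only in c4"],
    "culturax": ["merhaba  DÜNYA"],  # identical after normalization
}
print(build_consensus(docs))
# one consensus document, sources ["c4", "culturax"]
```

Because normalization lowercases and collapses whitespace before hashing, case and spacing variants of the same text count as one document across sources.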
#### Consensus Output Schema
```python
import pyarrow as pa

schema = pa.schema([
    ('text', pa.string()),
    ('id', pa.string()),
    ('sources', pa.list_(pa.string())),   # e.g., ["c4", "culturax"]
    ('all_ids', pa.list_(pa.string())),   # e.g., ["c4:url1", "culturax:url2"]
    ('metadata', pa.struct([
        ('source', pa.string()),          # "consensus"
    ])),
])
```

#### Running Consensus Builder
```bash
python build_consensus_v2.py --language hi
python build_consensus_v2.py --language tr
```

**Runtime**: Hindi ~2 hours, Turkish ~7 hours

---

### Stage 5: Upload to HuggingFace

#### Create Repositories
```bash
huggingface-cli repo create HinMix --organization AdaMLLab --type dataset
huggingface-cli repo create TurMix --organization AdaMLLab --type dataset
```

#### Staging Directory Structure
```
hf_staging/HinMix/
├── README.md
├── minhash_deduped/
│   ├── c4/*.parquet
│   ├── culturax/*.parquet
│   ├── fineweb2/*.parquet
│   ├── hplt2/*.parquet
│   ├── sangraha_unverified/*.parquet
│   └── sangraha_verified/*.parquet
├── quality_filtered/
│   └── (same structure as minhash_deduped)
└── consensus/
    └── consensus.parquet
```

#### Upload Command
```bash
# Use upload-large-folder for large datasets
hf upload-large-folder AdaMLLab/HinMix hf_staging/HinMix --repo-type dataset --num-workers 8
hf upload-large-folder AdaMLLab/TurMix hf_staging/TurMix --repo-type dataset --num-workers 8
```

---

## Final Statistics

### Hindi (HinMix)

| Stage | Documents | Size | Notes |
|-------|-----------|------|-------|
| **Quality Filtered** | ~99M | 231GB | All 6 sources combined |
| **MinHash Deduped** | ~60M | 136GB | 40% reduction |
| **Consensus** | 1.92M | 3.7GB | Docs in 2+ sources |

**Consensus Source Participation**:
- fineweb2: 1,602,172
- hplt2: 1,194,132
- sangraha_unverified: 600,851
- culturax: 277,990
- sangraha_verified: 153,060
- c4: 71,462

### Turkish (TurMix)

| Stage | Documents | Size | Notes |
|-------|-----------|------|-------|
| **Quality Filtered** | ~49M | 658GB | All 5 sources combined |
| **MinHash Deduped** | ~27M | 359GB | 46% reduction |
| **Consensus** | 7.84M | 13GB | Docs in 2+ sources |

**Consensus Source Participation**:
- fineweb2: 7,217,270
- hplt2: 7,075,189
- culturax: 686,152
- c4: 419,307
- vngrs: 402,760

---

## Key Scripts

### run_pipeline.py (Main Entry Point)
```python
#!/usr/bin/env python3
"""
Usage:
    python run_pipeline.py --language hi --stage download
    python run_pipeline.py --language hi --stage filter
    python run_pipeline.py --language hi --stage minhash
    python run_pipeline.py --language tr --stage all
"""
```

### build_consensus_v2.py (Consensus Builder)
```python
#!/usr/bin/env python3
"""
Memory-efficient two-pass consensus builder.
Usage:
    python build_consensus_v2.py --language hi
    python build_consensus_v2.py --language tr
"""
```

### filter_c4.py (C4 JSON Filtering)
```python
#!/usr/bin/env python3
"""
Filters C4 JSON files (not parquet) with schema normalization.
Usage:
    python filter_c4.py --language hi
    python filter_c4.py --language tr
"""
```

---

## Troubleshooting

### Common Issues

1. **Storage Full During MinHash**
   - MinHash signatures can grow to 300+ GB
   - Ensure at least 500GB of free space before starting
   - If interrupted, clean `minhash_signatures/`, `minhash_buckets/`, `minhash_clusters/` and restart

2. **Memory Issues During Consensus**
   - Use `build_consensus_v2.py` (two-pass, memory-efficient)
   - The original `build_consensus.py` requires 100+ GB RAM

3. **C4 Schema Mismatch**
   - C4 is JSON, not parquet
   - Use `filter_c4.py` with `JsonlReader` and a schema adapter

4. **HPLT2 Nested Folders**
   - Turkish HPLT2 has 5 subfolders (`tur_Latn_1` to `tur_Latn_5`)
   - Use `fix_hplt2_filter.py`, which handles all subfolders

### Recovery Commands
```bash
# Check running processes
ps aux | grep "run_pipeline\|build_consensus" | grep -v grep

# Check disk usage
df -h /home/alrashsm/Documents/Github/arabic-pretraining-mix-other-languages/data/

# Check MinHash progress
ls data/hi/logs/minhash_sig/completions/ | wc -l  # Should reach 54
tail -20 data/hi/minhash.log
```

---

## License

This pipeline and the resulting datasets are released under CC-BY-4.0.
Individual source datasets have their own licenses; refer to the original sources.

---

## Citation

```bibtex
@dataset{hinmix_turmix_2024,
  title={HinMix and TurMix: Hindi and Turkish Pretraining Data Mixes},
  author={AdaMLLab},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/AdaMLLab}
}
```