SultanR committed (verified) · Commit 1c2f22d · 1 Parent(s): f33c838

Add files using upload-large-folder tool

Files changed (50)
  1. README.md +20 -1
  2. minhash_deduped/fineweb2/000_00004.parquet +3 -0
  3. minhash_deduped/fineweb2/000_00007.parquet +3 -0
  4. minhash_deduped/fineweb2/000_00009.parquet +3 -0
  5. minhash_deduped/fineweb2/000_00018.parquet +3 -0
  6. minhash_deduped/fineweb2/000_00019.parquet +3 -0
  7. quality_filtered/hplt2/tur_Latn_1/000_00000.parquet +3 -0
  8. quality_filtered/hplt2/tur_Latn_1/000_00001.parquet +3 -0
  9. quality_filtered/hplt2/tur_Latn_1/000_00002.parquet +3 -0
  10. quality_filtered/hplt2/tur_Latn_1/000_00003.parquet +3 -0
  11. quality_filtered/hplt2/tur_Latn_1/000_00004.parquet +3 -0
  12. quality_filtered/hplt2/tur_Latn_1/000_00006.parquet +3 -0
  13. quality_filtered/hplt2/tur_Latn_1/000_00007.parquet +3 -0
  14. quality_filtered/hplt2/tur_Latn_1/000_00008.parquet +3 -0
  15. quality_filtered/hplt2/tur_Latn_1/000_00009.parquet +3 -0
  16. quality_filtered/hplt2/tur_Latn_1/000_00010.parquet +3 -0
  17. quality_filtered/hplt2/tur_Latn_1/000_00011.parquet +3 -0
  18. quality_filtered/hplt2/tur_Latn_1/000_00012.parquet +3 -0
  19. quality_filtered/hplt2/tur_Latn_1/000_00013.parquet +3 -0
  20. quality_filtered/hplt2/tur_Latn_1/000_00015.parquet +3 -0
  21. quality_filtered/hplt2/tur_Latn_1/000_00016.parquet +3 -0
  22. quality_filtered/hplt2/tur_Latn_1/000_00017.parquet +3 -0
  23. quality_filtered/hplt2/tur_Latn_1/000_00019.parquet +3 -0
  24. quality_filtered/hplt2/tur_Latn_1/000_00021.parquet +3 -0
  25. quality_filtered/hplt2/tur_Latn_1/000_00022.parquet +3 -0
  26. quality_filtered/hplt2/tur_Latn_1/000_00023.parquet +3 -0
  27. quality_filtered/hplt2/tur_Latn_1/000_00025.parquet +3 -0
  28. quality_filtered/hplt2/tur_Latn_1/000_00026.parquet +3 -0
  29. quality_filtered/hplt2/tur_Latn_1/000_00029.parquet +3 -0
  30. quality_filtered/hplt2/tur_Latn_1/000_00030.parquet +3 -0
  31. quality_filtered/hplt2/tur_Latn_1/000_00031.parquet +3 -0
  32. quality_filtered/hplt2/tur_Latn_1/000_00032.parquet +3 -0
  33. quality_filtered/hplt2/tur_Latn_1/000_00033.parquet +3 -0
  34. quality_filtered/hplt2/tur_Latn_1/000_00034.parquet +3 -0
  35. quality_filtered/hplt2/tur_Latn_1/000_00035.parquet +3 -0
  36. quality_filtered/hplt2/tur_Latn_1/000_00036.parquet +3 -0
  37. quality_filtered/hplt2/tur_Latn_1/000_00037.parquet +3 -0
  38. quality_filtered/hplt2/tur_Latn_1/000_00041.parquet +3 -0
  39. quality_filtered/hplt2/tur_Latn_1/000_00042.parquet +3 -0
  40. quality_filtered/hplt2/tur_Latn_1/000_00043.parquet +3 -0
  41. quality_filtered/hplt2/tur_Latn_1/000_00044.parquet +3 -0
  42. quality_filtered/hplt2/tur_Latn_1/000_00045.parquet +3 -0
  43. quality_filtered/hplt2/tur_Latn_1/000_00046.parquet +3 -0
  44. quality_filtered/hplt2/tur_Latn_1/000_00047.parquet +3 -0
  45. quality_filtered/hplt2/tur_Latn_1/000_00048.parquet +3 -0
  46. quality_filtered/hplt2/tur_Latn_1/000_00050.parquet +3 -0
  47. quality_filtered/hplt2/tur_Latn_1/000_00051.parquet +3 -0
  48. quality_filtered/hplt2/tur_Latn_1/000_00052.parquet +3 -0
  49. quality_filtered/hplt2/tur_Latn_1/000_00053.parquet +3 -0
  50. turmix_hinmix_exact_guide_reproduce.md +443 -0
README.md CHANGED
@@ -93,4 +93,23 @@ Documents were filtered based on:
 - Document length constraints
 - Line quality metrics
 - Repetition detection (including Turkish-specific patterns)
-- Boilerplate/policy phrase removal
+- Boilerplate/policy phrase removal
+
+Filter thresholds are based on the Fineweb-2 Turkish configuration.
+
+## Citation
+
+If you use this dataset, please cite:
+
+```bibtex
+@dataset{turmix2024,
+  title={TurMix: Turkish Pretraining Data Mix},
+  author={AdaMLLab},
+  year={2024},
+  publisher={Hugging Face}
+}
+```
+
+## License
+
+This dataset is released under CC-BY-4.0. Individual source datasets may have their own licenses.
minhash_deduped/fineweb2/000_00004.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a222aa5f41e9db7088c1ec37084bc8f55b09b3f17670b323867fc53e41163c0f
+size 4011270479
minhash_deduped/fineweb2/000_00007.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3498720b7ee4e8462294d5170bb1efc7c326a483f6d50be1d24bc58674efae53
+size 3779885381
minhash_deduped/fineweb2/000_00009.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:56663ef3bc703daaefeb577d28b044f5cbe82fa938ab58ea0ff28e48542b69bd
+size 3632204507
minhash_deduped/fineweb2/000_00018.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0186ba26a441905d6a3fee961e6ecc9f718f73457ae23bc15336e189923e0bad
+size 3072161452
minhash_deduped/fineweb2/000_00019.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f603d15541ecdc80c89058cc1ec3b8e983fcacdc8a56017c03cf8a9af7deef0b
+size 3024462769
quality_filtered/hplt2/tur_Latn_1/000_00000.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:89ec7ffc61df7dee1f54478978bbc6299436eb516b35f3e1091c122d539707aa
+size 451982279
quality_filtered/hplt2/tur_Latn_1/000_00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6e85035d11a1b189809ffb8727a7f334982b57681b50b9f922843a17280c88d5
+size 460922135
quality_filtered/hplt2/tur_Latn_1/000_00002.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2b052caf678d50e0d0c9c26098b827066826b7830f45cab5b678600b003304c6
+size 458348213
quality_filtered/hplt2/tur_Latn_1/000_00003.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9df985f2cdab44367f93cfeb31d8227322330d2bde09ada3bdb8f3eb5f442e2
+size 468043576
quality_filtered/hplt2/tur_Latn_1/000_00004.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e46f8b5d0aadfa3d78052b6888b4b919c6a792f68cffca55f5634cb5a1a8a929
+size 470517924
quality_filtered/hplt2/tur_Latn_1/000_00006.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee0aed189d78be561c1be4e9393ac040e77daa0b8547ba6585023beb0effd03f
+size 543065497
quality_filtered/hplt2/tur_Latn_1/000_00007.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7facac031456b9b6c38c8815dd08447cebfbd4f8ded09715480cfcbe40c85ed6
+size 454244080
quality_filtered/hplt2/tur_Latn_1/000_00008.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:06863d3f6feeddc10813a47f057d17e38ae19f4ea738ab0815ce191dd8daf24d
+size 465539081
quality_filtered/hplt2/tur_Latn_1/000_00009.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4067cb441e4eab01e9b87c2ef456f8cf4bab890bfde95a06384ea4c14911f5a0
+size 471775176
quality_filtered/hplt2/tur_Latn_1/000_00010.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8674924c0584c602d4adf49c090dafdcaf963c4607f1c488510143e8d335171a
+size 485255459
quality_filtered/hplt2/tur_Latn_1/000_00011.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4da3f741509cdf9f3cd42d27868b21d3ca1848cf23e61b9bf6e8fddff429155e
+size 525053184
quality_filtered/hplt2/tur_Latn_1/000_00012.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:176946a965570b0fc1c4e6b11d95dc915aaa8e2d823b936df3030400c1e75a07
+size 506941203
quality_filtered/hplt2/tur_Latn_1/000_00013.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:608c59cf4fded9f9470cfa90fb8dafce877342b60845daf89193a145ff14639e
+size 539339488
quality_filtered/hplt2/tur_Latn_1/000_00015.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:77c0c4384f9911895e291005d4ad1e894ab5c5daaccd282b83115f5f9344dd57
+size 537225849
quality_filtered/hplt2/tur_Latn_1/000_00016.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7085f2fc346967ba519de2384a9b6286c1a59b6ebe3a0e0fe021db615f354ef
+size 530113287
quality_filtered/hplt2/tur_Latn_1/000_00017.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9584a6da2bba4184e384f84c8ff14c84dc063d10decd3e913696b5cbb5d171a9
+size 536567254
quality_filtered/hplt2/tur_Latn_1/000_00019.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c4aff94f5498ed418a99a0c70d9e39051141fdde7ec988d68955c8615ad9e78d
+size 531375841
quality_filtered/hplt2/tur_Latn_1/000_00021.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:44dd7cccd8829b1734bebb7eb25f0c0f43ec5d783007583d85e2a33a3eda0416
+size 530025471
quality_filtered/hplt2/tur_Latn_1/000_00022.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d6ae9365a25d1d13156bacbf0cbd8cd449474a22cdf9c51d704f037c66e1b86
+size 510861098
quality_filtered/hplt2/tur_Latn_1/000_00023.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3365ae7ab36eecab5de3d38bd5cd49e6f3aae93747415e36fb931065c66ca1c9
+size 508567786
quality_filtered/hplt2/tur_Latn_1/000_00025.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8427d430843934c367d96b4b3c2a4c8470dd3e4aeb269cd2ea21630f4fd7d308
+size 524286872
quality_filtered/hplt2/tur_Latn_1/000_00026.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a0b89df7a7b7c6bcad35e599bf1902fb3e2d624e22fb38d1d120ec0e3157ae89
+size 528586236
quality_filtered/hplt2/tur_Latn_1/000_00029.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1bcb14c481472684da835ac99b158bd5ccebc34e7168eb26273c9d3fe646ec3f
+size 539567864
quality_filtered/hplt2/tur_Latn_1/000_00030.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:100e96a5dc7c8fdf9d1ed722d41ed064da2b3884f1379bd13de12c507b2acd92
+size 532932201
quality_filtered/hplt2/tur_Latn_1/000_00031.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fda7234365a93bed6b90eda1289a08bb58156e1e230fc06693898b6b16469bdf
+size 534036330
quality_filtered/hplt2/tur_Latn_1/000_00032.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6e313ba6304893ec244f75c9a6b91b67047176f4e26471a74f025601da85559b
+size 536631441
quality_filtered/hplt2/tur_Latn_1/000_00033.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:053fee6d65c61060279ee17ecb0f2f532cf626fda518151c448e65a2a965a086
+size 529032490
quality_filtered/hplt2/tur_Latn_1/000_00034.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f6dd545837943d9e290f52a22d694741b000f302ba45e76abc3afe4e77095ace
+size 522281643
quality_filtered/hplt2/tur_Latn_1/000_00035.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:42b318e3acb3b9b206ec23ec967ba4035a3ef588b676298af0a582af255e3ba9
+size 519326245
quality_filtered/hplt2/tur_Latn_1/000_00036.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:286b76e9300e5240073a102677bf0e55a626af8a45707caa06ff850bb642f8bd
+size 515608428
quality_filtered/hplt2/tur_Latn_1/000_00037.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e2fce7694fdb06ec581b25859394d50084c7792df900c80edfcc0afe6fc5193d
+size 511422878
quality_filtered/hplt2/tur_Latn_1/000_00041.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:acbcd620b13294762aad17d2b7a3e1609d16f7df71e285e58fe72bd08e5ec343
+size 317086096
quality_filtered/hplt2/tur_Latn_1/000_00042.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:daa53e1844d90d4130ff059c3cd5b700e1eaffee6fb1ee65b0c2afca8090bcc6
+size 320394508
quality_filtered/hplt2/tur_Latn_1/000_00043.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a0d4f848d5f5e3cfb3c38dd5f322e3cb649e675534f8a59820219586d04a0d75
+size 318029352
quality_filtered/hplt2/tur_Latn_1/000_00044.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ccb7982a6ec9657808d63d859b73b0941ec2de6965daeb3b66faa745bdb716dc
+size 323151696
quality_filtered/hplt2/tur_Latn_1/000_00045.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:675ecd590ba506b2e924dc5ad4bc6f75b20cbfdb2c730594712ad1ff8c277205
+size 320987543
quality_filtered/hplt2/tur_Latn_1/000_00046.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a96937859886fca654e7ebafbef691c5a938258fe2bd2a2142356331dbc0abc0
+size 320967582
quality_filtered/hplt2/tur_Latn_1/000_00047.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64e47ade826145f4296551532a40084848e413defabbb7e54bc5f55f30aa22d4
+size 320665220
quality_filtered/hplt2/tur_Latn_1/000_00048.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:208b0d672d83671fdef280b8eb9a85426915a9d16487f8a11c24c9ddaf21a0b6
+size 320874105
quality_filtered/hplt2/tur_Latn_1/000_00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee1bbefb3142064f104d5ba1e196a7893aeeab499311c8c4beac619c4706e05d
+size 321562774
quality_filtered/hplt2/tur_Latn_1/000_00051.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59eceb9c1537be4029958dc671a6de1d5f455ec82becbbde130228373ceed795
+size 320860300
quality_filtered/hplt2/tur_Latn_1/000_00052.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b26c8d480a2a7aefdc60438792c60645bdd1639c75f91f56d29e2be1528d8c6
+size 324056438
quality_filtered/hplt2/tur_Latn_1/000_00053.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:abb7f06ccb6229e27673602bf9c9a77531af7f41f059c95fb5aa99666087e969
+size 322162088
turmix_hinmix_exact_guide_reproduce.md ADDED
@@ -0,0 +1,443 @@
# TurMix & HinMix: Exact Reproduction Guide

This document provides a complete guide to reproducing the Turkish (TurMix) and Hindi (HinMix) pretraining data pipelines.

## Table of Contents
1. [Overview](#overview)
2. [Environment Setup](#environment-setup)
3. [Project Structure](#project-structure)
4. [Data Sources](#data-sources)
5. [Pipeline Stages](#pipeline-stages)
6. [Stage 1: Download](#stage-1-download)
7. [Stage 2: Quality Filtering](#stage-2-quality-filtering)
8. [Stage 3: MinHash Deduplication](#stage-3-minhash-deduplication)
9. [Stage 4: Consensus Subset Construction](#stage-4-consensus-subset-construction)
10. [Stage 5: Upload to HuggingFace](#stage-5-upload-to-huggingface)
11. [Final Statistics](#final-statistics)
12. [Key Scripts](#key-scripts)
13. [Troubleshooting](#troubleshooting)
14. [License](#license)
15. [Citation](#citation)

---

## Overview

The pipeline processes web crawl data through the following stages:
1. **Download**: Fetch data from HuggingFace using `huggingface-cli`
2. **Quality Filter**: Apply language-specific quality filters using DataTrove
3. **MinHash Dedup**: Remove near-duplicate documents within each source
4. **Consensus**: Identify documents appearing in 2+ sources (exact text match)

### Key Design Decisions
- Use `huggingface-cli download` with `--include` patterns for efficient selective downloads
- Process each source separately to avoid parquet schema conflicts
- Use 54 workers (CPU cores - 2) for parallel processing; see the sketch below
- MinHash deduplication within each source (not cross-source)
- Consensus detection via exact text hash matching across sources
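
A minimal illustration of the worker-count rule above (the pipeline machine has 56 cores, hence 54 workers; `os.cpu_count()` is the standard way to read the core count):

```python
import os

# Leave two cores free for the OS and I/O: a 56-core machine yields 54 workers.
WORKERS = max(1, (os.cpu_count() or 1) - 2)
```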

---

## Environment Setup

### Prerequisites
```bash
# Python 3.10+
conda create -n pretraining python=3.10
conda activate pretraining

# Install dependencies
pip install datasets datatrove pyarrow huggingface_hub
pip install fasttext-langdetect  # For language detection
```

### HuggingFace Authentication
```bash
huggingface-cli login
# Enter your HuggingFace token
```

---

## Project Structure

```
arabic-pretraining-mix-other-languages/
├── run_pipeline.py              # Main unified pipeline runner
├── build_consensus_v2.py        # Consensus subset builder (memory-efficient)
├── filter_c4.py                 # C4 JSON filtering script
├── fix_hplt2_filter.py          # Turkish HPLT2 filtering (5 subfolders)
├── fix_hindi_hplt2_filter.py    # Hindi HPLT2 filtering
├── src/
│   ├── config/
│   │   ├── common.py            # Shared configs (paths, workers, MinHash params)
│   │   ├── datasets_tr.py       # Turkish dataset definitions
│   │   └── datasets_hi.py       # Hindi dataset definitions
│   ├── filters/
│   │   ├── base_quality.py      # Base quality filter class
│   │   ├── tr_quality.py        # Turkish quality filter
│   │   ├── hi_quality.py        # Hindi quality filter
│   │   └── lang_config.py       # Language-specific constants
│   └── dedup/
│       └── __init__.py          # MinHash deduplication wrappers
└── data/
    ├── hi/                      # Hindi data
    │   ├── downloads/           # Raw downloaded data
    │   ├── filtered/            # Quality-filtered data
    │   ├── deduped/             # MinHash-deduplicated data
    │   ├── consensus/           # Consensus subset
    │   └── minhash_signatures/  # MinHash signatures (preserved)
    └── tr/                      # Turkish data (same structure)
```

---

## Data Sources

### Hindi Sources (6)
| Source | HuggingFace Path | Subset/Config | Download Command |
|--------|------------------|---------------|------------------|
| HPLT-2 | `HPLT/HPLT2.0_cleaned` | `hin_Deva` | `huggingface-cli download HPLT/HPLT2.0_cleaned --include "hin_Deva/*" --local-dir ./data/hi/hplt2 --repo-type dataset` |
| Fineweb-2 | `HuggingFaceFW/fineweb-2` | `hin_Deva` | `huggingface-cli download HuggingFaceFW/fineweb-2 --include "data/hin_Deva/*" --local-dir ./data/hi/fineweb2 --repo-type dataset` |
| CulturaX | `uonlp/CulturaX` | `hi` | `huggingface-cli download uonlp/CulturaX --include "hi/*" --local-dir ./data/hi/culturax --repo-type dataset` |
| mC4 | `allenai/c4` | `hi` | `huggingface-cli download allenai/c4 --include "multilingual/c4-hi*" --local-dir ./data/hi/c4 --repo-type dataset` |
| Sangraha (verified) | `ai4bharat/sangraha` | `verified/hin` | `huggingface-cli download ai4bharat/sangraha --include "verified/hin/*" --local-dir ./data/hi/sangraha_verified --repo-type dataset` |
| Sangraha (unverified) | `ai4bharat/sangraha` | `unverified/hin` | `huggingface-cli download ai4bharat/sangraha --include "unverified/hin/*" --local-dir ./data/hi/sangraha_unverified --repo-type dataset` |

### Turkish Sources (5)
| Source | HuggingFace Path | Subset/Config | Download Command |
|--------|------------------|---------------|------------------|
| HPLT-2 | `HPLT/HPLT2.0_cleaned` | `tur_Latn` | `huggingface-cli download HPLT/HPLT2.0_cleaned --include "tur_Latn*/*" --local-dir ./data/tr/hplt2 --repo-type dataset` |
| Fineweb-2 | `HuggingFaceFW/fineweb-2` | `tur_Latn` | `huggingface-cli download HuggingFaceFW/fineweb-2 --include "data/tur_Latn/*" --local-dir ./data/tr/fineweb2 --repo-type dataset` |
| CulturaX | `uonlp/CulturaX` | `tr` | `huggingface-cli download uonlp/CulturaX --include "tr/*" --local-dir ./data/tr/culturax --repo-type dataset` |
| mC4 | `allenai/c4` | `tr` | `huggingface-cli download allenai/c4 --include "multilingual/c4-tr*" --local-dir ./data/tr/c4 --repo-type dataset` |
| VNGRS | `vngrs-ai/vngrs-web-corpus` | N/A | `huggingface-cli download vngrs-ai/vngrs-web-corpus --local-dir ./data/tr/vngrs --repo-type dataset` |

**Note**: Turkish HPLT-2 is split into 5 subfolders: `tur_Latn_1` through `tur_Latn_5`.
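
Equivalently, `huggingface_hub.snapshot_download` (the Python function behind `huggingface-cli download`) can perform the same selective fetch; a sketch for one of the rows above:

```python
from huggingface_hub import snapshot_download

# Selective download of the Hindi CulturaX split, mirroring the --include pattern.
snapshot_download(
    repo_id="uonlp/CulturaX",
    repo_type="dataset",
    allow_patterns=["hi/*"],
    local_dir="./data/hi/culturax",
)
```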

---

## Pipeline Stages

### Stage 1: Download

Downloads are performed using `huggingface-cli download` with `--include` patterns:

```bash
# Example: Download Hindi CulturaX
huggingface-cli download uonlp/CulturaX \
  --include "hi/*" \
  --local-dir ./data/hi/culturax \
  --repo-type dataset
```

The `run_pipeline.py` script automates this:
```bash
python run_pipeline.py --language hi --stage download
python run_pipeline.py --language tr --stage download
```

---

### Stage 2: Quality Filtering

Quality filtering uses DataTrove with custom language-specific filters.

#### Filter Configuration (from Fineweb-2)

**Hindi Filter Thresholds** (`src/filters/hi_quality.py`):
```python
HINDI_FILTER_CONFIG = {
    "min_script_ratio": 0.5,   # Devanagari script ratio
    "lang_score_threshold": 0.692,
    "dup_line_frac": 0.206,
    "new_line_ratio": 0.316,
    "min_avg_word_length": 2,
    "max_avg_word_length": 21,
    "line_punct_thr": 0.091,
    "non_alpha_words_ratio": 0.837,
    "top_5_gram_frac": 0.135,
    "top_10_gram_frac": 0.090,
}
```

**Turkish Filter Thresholds** (`src/filters/tr_quality.py`):
```python
TURKISH_FILTER_CONFIG = {
    "min_script_ratio": 0.65,  # Turkish Latin script ratio
    "lang_score_threshold": 0.875,
    "dup_line_frac": 0.272,
    "new_line_ratio": 0.222,
    "min_avg_word_length": 3,
    "max_avg_word_length": 21,
    "line_punct_thr": 0.091,
    "non_alpha_words_ratio": 0.773,
    "top_5_gram_frac": 0.154,
    "top_10_gram_frac": 0.103,
}
```
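
To make the thresholds concrete, here is an illustrative sketch of how two of them could be computed and checked on a single document. This is not the repository's actual filter code; the helper names are hypothetical:

```python
def dup_line_frac(text: str) -> float:
    # Fraction of non-empty lines that duplicate an earlier line.
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    if not lines:
        return 1.0
    return 1.0 - len(set(lines)) / len(lines)


def avg_word_length(text: str) -> float:
    words = text.split()
    return sum(len(w) for w in words) / len(words) if words else 0.0


def passes_filters(text: str, cfg: dict) -> bool:
    # A document is dropped as soon as any threshold is violated.
    if dup_line_frac(text) > cfg["dup_line_frac"]:
        return False
    if not cfg["min_avg_word_length"] <= avg_word_length(text) <= cfg["max_avg_word_length"]:
        return False
    return True  # the remaining thresholds would be checked the same way


# Example: passes_filters(document_text, TURKISH_FILTER_CONFIG)
```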

#### Output Schema Normalization

All filtered output uses a unified schema:
```python
import pyarrow as pa

OUTPUT_SCHEMA = pa.schema([
    ("text", pa.string()),
    ("id", pa.string()),
    ("metadata", pa.struct([
        ("source", pa.string()),
    ])),
])
```
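
As an illustration of the normalization step, a source row can be reduced to this unified schema and written with pyarrow (the raw `id`/`url` fields in `normalize` are hypothetical; `OUTPUT_SCHEMA` is the schema defined above):

```python
import pyarrow as pa
import pyarrow.parquet as pq


def normalize(record: dict, source: str) -> dict:
    # Keep only the unified fields; all other source-specific columns are dropped.
    return {
        "text": record["text"],
        "id": str(record.get("id") or record.get("url", "")),
        "metadata": {"source": source},
    }


rows = [normalize({"text": "Merhaba dünya", "url": "http://example.com"}, "culturax")]
pq.write_table(pa.Table.from_pylist(rows, schema=OUTPUT_SCHEMA), "normalized.parquet")
```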

#### Running Filtering

```bash
# Main parquet datasets (culturax, fineweb2, sangraha, vngrs)
python run_pipeline.py --language hi --stage filter
python run_pipeline.py --language tr --stage filter

# C4 JSON files (requires a separate script due to the JSON format)
python filter_c4.py --language hi
python filter_c4.py --language tr

# HPLT2 (requires separate handling due to its nested structure)
python fix_hindi_hplt2_filter.py
python fix_hplt2_filter.py  # Turkish - 5 subfolders
```

---

### Stage 3: MinHash Deduplication

MinHash deduplication removes near-duplicate documents within each source.

#### MinHash Configuration
```python
MINHASH_CONFIG = {
    "n_grams": 5,
    "num_buckets": 14,
    "hashes_per_bucket": 8,
    "similarity_threshold": 0.8,
}
```

#### MinHash Stages (via DataTrove)
1. **Stage 1 - Signatures**: Generate MinHash signatures for each document
2. **Stage 2 - Buckets**: Group documents by LSH buckets to find candidate pairs
3. **Stage 3 - Cluster**: Cluster similar documents together
4. **Stage 4 - Filter**: Keep one representative per cluster and write the deduped output (see the sketch below)
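
A minimal sketch of wiring these four stages together with DataTrove. The class and parameter names follow DataTrove's minhash module, but exact signatures vary across versions, so treat this as an outline rather than the repository's actual `run_pipeline.py`:

```python
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.dedup import MinhashDedupSignature
from datatrove.pipeline.dedup.minhash import MinhashConfig, MinhashDedupFilter
from datatrove.pipeline.readers import ParquetReader
from datatrove.pipeline.writers import ParquetWriter

cfg = MinhashConfig(n_grams=5, num_buckets=14, hashes_per_bucket=8)

# Stage 1: compute per-document signatures.
LocalPipelineExecutor(pipeline=[
    ParquetReader("data/tr/filtered/culturax"),
    MinhashDedupSignature(output_folder="data/tr/minhash_signatures/culturax", config=cfg),
], tasks=54).run()

# Stages 2 and 3 (MinhashDedupBuckets, MinhashDedupCluster) run the same way,
# each reading the previous stage's output folder.

# Stage 4: drop the duplicates flagged by the clusters, write the deduped parquet.
LocalPipelineExecutor(pipeline=[
    ParquetReader("data/tr/filtered/culturax"),
    MinhashDedupFilter(input_folder="data/tr/minhash_clusters/culturax"),
    ParquetWriter("data/tr/deduped/culturax"),
], tasks=54).run()
```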

#### Running MinHash
```bash
python run_pipeline.py --language hi --stage minhash
python run_pipeline.py --language tr --stage minhash
```

**Runtime**: Hindi ~17 hours, Turkish ~30 hours (on a 56-core machine)

---

### Stage 4: Consensus Subset Construction

The consensus subset identifies documents that appear in 2+ sources using exact text hash matching.

#### Algorithm (Two-Pass, Memory-Efficient)

**Pass 1**: Build the hash-to-sources index
```python
import hashlib

# For each document, compute the MD5 hash of the normalized text.
# Store only: hash -> set of sources (not the full text).
def compute_text_hash(text: str) -> str:
    normalized = ' '.join(text.lower().split())
    return hashlib.md5(normalized.encode('utf-8')).hexdigest()

# Pass 1: hash_to_sources[hash].add(source)
```

**Pass 2**: Re-read the data and collect every document whose hash appears in 2+ sources, storing the full document together with its sources list (see the sketch below).
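
A condensed, runnable sketch of the two-pass algorithm; it is a simplification of `build_consensus_v2.py` (the file layout and the emission of a single representative per hash are assumptions based on the schemas in this guide):

```python
import hashlib
from collections import defaultdict

import pyarrow.parquet as pq


def compute_text_hash(text: str) -> str:
    normalized = ' '.join(text.lower().split())
    return hashlib.md5(normalized.encode('utf-8')).hexdigest()


def build_consensus(files_by_source: dict, min_sources: int = 2):
    # Pass 1: keep only hash -> set of source names in memory, never the text.
    hash_to_sources = defaultdict(set)
    for source, files in files_by_source.items():
        for path in files:
            for text in pq.read_table(path, columns=["text"])["text"].to_pylist():
                hash_to_sources[compute_text_hash(text)].add(source)
    keep = {h for h, s in hash_to_sources.items() if len(s) >= min_sources}

    # Pass 2: re-read everything and emit one record per multi-source hash.
    emitted = set()
    for source, files in files_by_source.items():
        for path in files:
            table = pq.read_table(path, columns=["text", "id"])
            for text, doc_id in zip(table["text"].to_pylist(), table["id"].to_pylist()):
                h = compute_text_hash(text)
                if h in keep and h not in emitted:
                    emitted.add(h)
                    yield {"text": text, "id": doc_id,
                           "sources": sorted(hash_to_sources[h])}
```

The real builder additionally records every duplicate's id (the `all_ids` column in the schema below).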

#### Consensus Output Schema
```python
import pyarrow as pa

schema = pa.schema([
    ('text', pa.string()),
    ('id', pa.string()),
    ('sources', pa.list_(pa.string())),  # e.g., ["c4", "culturax"]
    ('all_ids', pa.list_(pa.string())),  # e.g., ["c4:url1", "culturax:url2"]
    ('metadata', pa.struct([
        ('source', pa.string()),         # "consensus"
    ])),
])
```

#### Running the Consensus Builder
```bash
python build_consensus_v2.py --language hi
python build_consensus_v2.py --language tr
```

**Runtime**: Hindi ~2 hours, Turkish ~7 hours

---

### Stage 5: Upload to HuggingFace

#### Create Repositories
```bash
huggingface-cli repo create HinMix --organization AdaMLLab --type dataset
huggingface-cli repo create TurMix --organization AdaMLLab --type dataset
```

#### Staging Directory Structure
```
hf_staging/HinMix/
├── README.md
├── minhash_deduped/
│   ├── c4/*.parquet
│   ├── culturax/*.parquet
│   ├── fineweb2/*.parquet
│   ├── hplt2/*.parquet
│   ├── sangraha_unverified/*.parquet
│   └── sangraha_verified/*.parquet
├── quality_filtered/
│   └── (same structure as minhash_deduped)
└── consensus/
    └── consensus.parquet
```

#### Upload Command
```bash
# Use upload-large-folder for large datasets
hf upload-large-folder AdaMLLab/HinMix hf_staging/HinMix --repo-type dataset --num-workers 8
hf upload-large-folder AdaMLLab/TurMix hf_staging/TurMix --repo-type dataset --num-workers 8
```
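
The same upload can be driven from Python with `HfApi.upload_large_folder`, the API behind `hf upload-large-folder`; a sketch, assuming the staging layout above:

```python
from huggingface_hub import HfApi

api = HfApi()
# Resumable multi-worker upload; re-running continues where it left off.
api.upload_large_folder(
    repo_id="AdaMLLab/HinMix",
    folder_path="hf_staging/HinMix",
    repo_type="dataset",
    num_workers=8,
)
```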

---

## Final Statistics

### Hindi (HinMix)

| Stage | Documents | Size | Notes |
|-------|-----------|------|-------|
| **Quality Filtered** | ~99M | 231GB | All 6 sources combined |
| **MinHash Deduped** | ~60M | 136GB | 40% reduction |
| **Consensus** | 1.92M | 3.7GB | Docs in 2+ sources |

**Consensus Source Participation**:
- fineweb2: 1,602,172
- hplt2: 1,194,132
- sangraha_unverified: 600,851
- culturax: 277,990
- sangraha_verified: 153,060
- c4: 71,462

### Turkish (TurMix)

| Stage | Documents | Size | Notes |
|-------|-----------|------|-------|
| **Quality Filtered** | ~49M | 658GB | All 5 sources combined |
| **MinHash Deduped** | ~27M | 359GB | 46% reduction |
| **Consensus** | 7.84M | 13GB | Docs in 2+ sources |

**Consensus Source Participation**:
- fineweb2: 7,217,270
- hplt2: 7,075,189
- culturax: 686,152
- c4: 419,307
- vngrs: 402,760

---

## Key Scripts

### run_pipeline.py (Main Entry Point)
```python
#!/usr/bin/env python3
"""
Usage:
    python run_pipeline.py --language hi --stage download
    python run_pipeline.py --language hi --stage filter
    python run_pipeline.py --language hi --stage minhash
    python run_pipeline.py --language tr --stage all
"""
```

### build_consensus_v2.py (Consensus Builder)
```python
#!/usr/bin/env python3
"""
Memory-efficient two-pass consensus builder.
Usage:
    python build_consensus_v2.py --language hi
    python build_consensus_v2.py --language tr
"""
```

### filter_c4.py (C4 JSON Filtering)
```python
#!/usr/bin/env python3
"""
Filters C4 JSON files (not parquet) with schema normalization.
Usage:
    python filter_c4.py --language hi
    python filter_c4.py --language tr
"""
```

---

## Troubleshooting

### Common Issues

1. **Storage Full During MinHash**
   - MinHash signatures can grow to 300+ GB
   - Ensure at least 500 GB of free space before starting (see the check below)
   - If interrupted, clean `minhash_signatures/`, `minhash_buckets/`, and `minhash_clusters/`, then restart

2. **Memory Issues During Consensus**
   - Use `build_consensus_v2.py` (two-pass, memory-efficient)
   - The original `build_consensus.py` requires 100+ GB of RAM

3. **C4 Schema Mismatch**
   - C4 is JSON, not parquet
   - Use `filter_c4.py` with `JsonlReader` and a schema adapter

4. **HPLT2 Nested Folders**
   - Turkish HPLT2 has 5 subfolders (`tur_Latn_1` to `tur_Latn_5`)
   - Use `fix_hplt2_filter.py`, which handles all subfolders
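
A quick pre-flight check for the free-space requirement in point 1 (a convenience sketch, not part of the pipeline):

```python
import shutil

free_gb = shutil.disk_usage("data/").free / 1e9
assert free_gb >= 500, f"Only {free_gb:.0f} GB free; MinHash needs ~500 GB of headroom"
```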

### Recovery Commands
```bash
# Check running processes
ps aux | grep "run_pipeline\|build_consensus" | grep -v grep

# Check disk usage
df -h /home/alrashsm/Documents/Github/arabic-pretraining-mix-other-languages/data/

# Check MinHash progress
ls data/hi/logs/minhash_sig/completions/ | wc -l  # Should reach 54
tail -20 data/hi/minhash.log
```

---

## License

This pipeline and the resulting datasets are released under CC-BY-4.0.
Individual source datasets have their own licenses; refer to the original sources.

---

## Citation

```bibtex
@dataset{hinmix_turmix_2024,
  title={HinMix and TurMix: Hindi and Turkish Pretraining Data Mixes},
  author={AdaMLLab},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/AdaMLLab}
}
```