  num_examples: 2005712
  download_size: 1106679567
  dataset_size: 1784778472
tags:
- turkish
- pretraining
- masked-language-modeling
- diffusion
- wikipedia
- oscar
- news
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-generation
language:
- tr
---

# DiffutronLM-Pretraining-Corpus

**DiffutronLM-Pretraining-Corpus** is the comprehensive, filtered Turkish text dataset used during the Continual Pre-training (CPT) phase of the [Diffutron](https://huggingface.co/collections/diffutron/diffutronlm) language models.

The primary goal of this dataset was to align the cross-lingual representations of a multilingual base encoder (`jhu-clsp/mmBERT-base`) with the agglutinative complexity and morphological nuances of the Turkish language, without inducing catastrophic forgetting.

## 📊 Dataset Composition

To balance structured encyclopedic knowledge against natural, diverse web and news usage, the corpus combines three primary open-source collections and contains roughly **2 million sequences** in total.

* **Turkish Wikipedia (~406,000 sequences):** Sourced from the standard encyclopedic subset published by the Wikimedia Foundation. It provides high-quality, factual, and structurally sound Turkish text.
* **Havadis & Temiz-OSCAR (~1,600,000 sequences):**
  * *Havadis:* A robust dataset of Turkish news articles providing formal and contemporary language usage.
  * *Temiz-OSCAR:* A heavily filtered and cleaned version of the Common Crawl-based Turkish OSCAR corpus, representing diverse internet text.
  * These two sources were merged, filtered, and uniformly sampled to extract 1.6 million high-quality sequences.

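The merge-and-sample step above can be sketched in plain Python. This is a toy illustration only: the list names and sizes are placeholders standing in for the two filtered sources, not the actual pipeline.

```python
import random

# Toy stand-ins for the two filtered sources (hypothetical names and sizes;
# the real pipeline drew 1.6M sequences from Havadis and Temiz-OSCAR).
havadis = [f"haber-{i}" for i in range(8)]
temiz_oscar = [f"web-{i}" for i in range(12)]

# Merge, shuffle for distributional uniformity, then take a uniform sample.
merged = havadis + temiz_oscar
rng = random.Random(42)
rng.shuffle(merged)
sample = merged[:10]  # the real corpus kept 1_600_000 sequences at this step
```

Shuffling before truncation is what makes the head of the list a uniform sample of the merged pool.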
## ⚙️ Preprocessing & Curation Strategy

The data was strictly curated to match the architectural constraints of the base Masked Diffusion Language Model (MDLM):

1. **Length Filtering:** To ensure compatibility and training stability, a strict length constraint was applied across all data sources: any sequence exceeding a **maximum token length of 512** was filtered out.
2. **Tokenization Alignment:** The text was tokenized with the `jhu-clsp/mmBERT-base` tokenizer, a crucial step for maintaining exact alignment with the pre-trained embedding space of the frozen backbone.
3. **Shuffling & Distribution:** The web and news subsets were thoroughly shuffled prior to sampling to ensure distributional uniformity during training.

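The length-filtering step can be sketched as follows. A whitespace split stands in for the actual subword tokenizer here; in practice the token counts would come from the `jhu-clsp/mmBERT-base` tokenizer (e.g. loaded via `transformers.AutoTokenizer`).

```python
MAX_TOKENS = 512

def keep(text: str, tokenize) -> bool:
    """True if the tokenized sequence fits the 512-token cap."""
    return len(tokenize(text)) <= MAX_TOKENS

# Stand-in tokenizer: whitespace split. Swap in the mmBERT subword
# tokenizer to reproduce the corpus filter exactly.
tokenize = str.split

corpus = ["Kısa bir Türkçe cümle.", "uzun " * 600]  # second entry: 600 tokens
filtered = [t for t in corpus if keep(t, tokenize)]
```

Because subword tokenizers emit more tokens than a whitespace split, the real filter is stricter than this sketch suggests.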
## 🚀 Intended Use

This corpus is optimized for:

* **Continual Pre-Training (CPT):** Adapting existing multilingual or general-purpose encoders to the Turkish language.
* **Masked Language Modeling (MLM):** Training models to predict masked or corrupted tokens (the foundational mechanism of discrete diffusion models).
* **Domain Adaptation:** Serving as a baseline corpus for general Turkish language modeling before task-specific instruction tuning.

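The masking mechanism referred to in the MLM bullet can be illustrated with a minimal sketch. The token ids and `MASK_ID` below are made up for illustration; the forward (noising) step of a masked diffusion model replaces a random subset of tokens with the mask id at a given noise level.

```python
import random

MASK_ID = 4  # hypothetical [MASK] token id

def corrupt(token_ids, mask_ratio, rng):
    """Replace a random subset of tokens with MASK_ID, as in MLM /
    the forward (noising) step of a masked diffusion model."""
    n_mask = max(1, round(len(token_ids) * mask_ratio))
    positions = rng.sample(range(len(token_ids)), n_mask)
    noisy = list(token_ids)
    for p in positions:
        noisy[p] = MASK_ID
    return noisy, sorted(positions)

rng = random.Random(0)
ids = [101, 2003, 7592, 2023, 102]
noisy, masked_at = corrupt(ids, mask_ratio=0.4, rng=rng)
```

Training then asks the model to recover the original tokens at the masked positions; sweeping `mask_ratio` from 0 to 1 is what distinguishes the diffusion setting from fixed-ratio MLM.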
## ⚠️ Limitations

* **Length Constraint:** The dataset inherently lacks long-form document structure, as all sequences are hard-capped at 512 tokens. It is not suitable for training long-context models without additional data.
* **Tokenization:** Although the data is provided as raw text, the length filters were applied using the specific subword tokenization of `mmBERT`. Re-tokenizing with a different tokenizer (such as LLaMA's or a custom BPE) may yield different sequence lengths.

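The tokenization caveat can be made concrete with two toy tokenizers (stand-ins only; the corpus filter used the mmBERT subword vocabulary): the same sentence yields very different lengths depending on the tokenizer, so a sequence that fits under 512 mmBERT tokens may exceed the cap under another vocabulary.

```python
# Two toy tokenizers with very different granularity.
def word_tokens(text: str) -> list[str]:
    return text.split()

def char_tokens(text: str) -> list[str]:
    return list(text)

sentence = "Türkçe sondan eklemeli bir dildir"
word_len = len(word_tokens(sentence))  # coarse: one token per word
char_len = len(char_tokens(sentence))  # fine: one token per character
```

A real subword tokenizer falls between these extremes, and agglutinative Turkish words tend to split into several subwords, so length checks should always be rerun with the target tokenizer.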
## 📝 Citation

If you use this dataset in your research, please cite the Diffutron paper:

```bibtex
@misc{diffutron2026,
  author       = {Kocabay, Şuayp Talha and Akkuş, Talha Rüzgar},
  title        = {Diffutron: A Masked Diffusion Language Model for Turkish Language},
  year         = {2026},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/collections/diffutron/diffutronlm}}
}
```