afkfatih committed
Commit 7f97191 · verified · 1 Parent(s): 788c69d

Add README

Files changed (1):
  1. README.md +72 -15
README.md CHANGED
@@ -1,17 +1,74 @@
 ---
-dataset_info:
-  features:
-  - name: text
-    dtype: string
-  splits:
-  - name: train
-    num_bytes: 5983244160
-    num_examples: 1908378
-  download_size: 3635810801
-  dataset_size: 5983244160
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
+language:
+- tr
+- en
+license: cc-by-4.0
+task_categories:
+- text-generation
+tags:
+- turkish
+- continual-pretraining
+- CPT
+- wikipedia
+- fineweb
+- c4
+size_categories:
+- 1B<n<10B
 ---
+
+# Turkish CPT Dataset
+
+A high-quality Turkish + English dataset for Continued Pre-Training (CPT) of language models.
+
+## Dataset Summary
+
+| Property | Value |
+|---|---|
+| Total examples | 1,908,378 |
+| Total tokens | ~2.19B |
+| Turkish ratio | ~80% |
+| English ratio | ~20% |
+| Languages | Turkish, English |
+
+## Sources
+
+| Source | Language | Examples | Description |
+|---|---|---|---|
+| [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) (tr) | TR | ~534K | Turkish Wikipedia |
+| [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) (en) | EN | ~134K | English Wikipedia (20% replay) |
+| [habanoz/c4_tr_fineweb_plus](https://huggingface.co/datasets/habanoz/c4_tr_fineweb_plus) | TR | ~500K | Filtered Turkish web text |
+| [HuggingFaceFW/fineweb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) (tur_Latn) | TR | ~500K | High-quality Turkish web data |
+| [HuggingFaceFW/fineweb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) | EN | ~300K | High-quality English web data |
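The stated language ratios can be sanity-checked against the per-source counts in the table. These are approximate, pre-cleaning figures, so the sum is slightly above the final 1,908,378 examples, and the raw English share comes out at roughly 22%, consistent with the stated ~20%:

```python
# Per-source example counts, approximated from the table above (pre-cleaning).
sources = [
    ("wikimedia/wikipedia (tr)",   "tr", 534_000),
    ("wikimedia/wikipedia (en)",   "en", 134_000),
    ("habanoz/c4_tr_fineweb_plus", "tr", 500_000),
    ("HuggingFaceFW/fineweb-2",    "tr", 500_000),
    ("HuggingFaceFW/fineweb",      "en", 300_000),
]

total = sum(n for _, _, n in sources)
en_share = sum(n for _, lang, n in sources if lang == "en") / total
print(f"total ≈ {total:,}, English share ≈ {en_share:.0%}")
# total ≈ 1,968,000, English share ≈ 22%
```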
+
+## Cleaning Pipeline
+
+The following industry-standard cleaning steps were applied:
+
+- **UTF-8 NFC normalization** — removes Unicode noise
+- **Whitespace normalization** — collapses excess newlines, tabs, and spaces
+- **URL removal** — strips web boilerplate
+- **Alphanumeric ratio filter** — drops spam/symbol-heavy text (min 50% alphanumeric)
+- **Repetitive line filter** — drops boilerplate (min 30% unique lines)
+- **Minimum length filter** — removes very short texts (under 50 tokens)
+- **Maximum length filter** — removes abnormally long documents (over 100K tokens)
+
+In total, 60,357 examples were removed (3.1% of the raw data), amounting to a 1.3% token loss.
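The steps above can be sketched as a single filter function. This is a minimal illustration, not the actual pipeline code: the thresholds are taken from the list, and token counts are approximated by whitespace splitting rather than a real tokenizer.

```python
import re
import unicodedata

URL_RE = re.compile(r"https?://\S+")

def clean_and_filter(text):
    """Apply the cleaning steps listed above to one example.

    Returns the cleaned text, or None if the example should be dropped.
    """
    # UTF-8 NFC normalization: remove Unicode noise
    text = unicodedata.normalize("NFC", text)
    # URL removal: strip web boilerplate
    text = URL_RE.sub(" ", text)
    # Whitespace normalization: collapse tabs/spaces, cap blank lines
    text = re.sub(r"[ \t]+", " ", text)
    text = re.sub(r"\n{3,}", "\n\n", text).strip()
    # Alphanumeric ratio filter: min 50% alphanumeric characters
    if text and sum(c.isalnum() for c in text) / len(text) < 0.5:
        return None
    # Repetitive line filter: min 30% unique non-empty lines
    lines = [ln for ln in text.splitlines() if ln.strip()]
    if lines and len(set(lines)) / len(lines) < 0.3:
        return None
    # Length filters: keep only 50..100K tokens (whitespace approximation)
    n_tokens = len(text.split())
    if n_tokens < 50 or n_tokens > 100_000:
        return None
    return text
```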
+
+## English Replay
+
+Roughly 20% English data is mixed in, following best practices from continual-pretraining research, to prevent **catastrophic forgetting** of the base model's reasoning capabilities.
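The card does not specify the exact mixing procedure; as one illustration, a probabilistic 80/20 replay draw can be sketched in pure Python (with `datasets.interleave_datasets([tr, en], probabilities=[0.8, 0.2])` as the library-native alternative):

```python
import random

rng = random.Random(42)  # fixed seed for reproducibility
tr = [f"tr-{i}" for i in range(8_000)]  # stand-in Turkish examples (~80%)
en = [f"en-{i}" for i in range(2_000)]  # stand-in English replay (~20%)

def replay_mix(primary, replay, replay_ratio=0.2):
    """Draw from the replay stream with probability `replay_ratio`;
    when one stream runs out, drain the other."""
    it_p, it_r = iter(primary), iter(replay)
    out = []
    for _ in range(len(primary) + len(replay)):
        pick_replay = rng.random() < replay_ratio
        try:
            out.append(next(it_r) if pick_replay else next(it_p))
        except StopIteration:
            out.extend(it_p if pick_replay else it_r)
            break
    return out

mixed = replay_mix(tr, en)
en_share = sum(x.startswith("en") for x in mixed) / len(mixed)
```

Every example from both toy streams ends up in `mixed`, so the English share is exactly 20% here; in a true probabilistic draw it would only approximate the target ratio.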
+
+## Usage
+
+```python
+from datasets import load_dataset
+
+dataset = load_dataset("afkfatih/turkish-cpt-dataset", split="train")
+```
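For CPT training itself, the loaded text is typically tokenized and packed into fixed-length blocks. A minimal sketch, with a hypothetical `block_size` and EOS token id (the card does not prescribe any particular tokenizer):

```python
def pack(token_streams, block_size=2048, eos_id=0):
    """Concatenate tokenized examples (EOS-separated) and cut them into
    fixed-length blocks; the trailing partial block is dropped."""
    buf, blocks = [], []
    for ids in token_streams:
        buf.extend(ids)
        buf.append(eos_id)
        while len(buf) >= block_size:
            blocks.append(buf[:block_size])
            buf = buf[block_size:]
    return blocks

# Toy example with block_size=4:
blocks = pack([[1, 2, 3], [4, 5, 6, 7]], block_size=4)
# → [[1, 2, 3, 0], [4, 5, 6, 7]]
```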
+
+## Intended Use
+
+This dataset is intended for continued pre-training of base or instruction-tuned language models, to improve Turkish language understanding and generation while preserving English capabilities.
+
+## License
+
+Each source dataset retains its original license; the compiled dataset is released under CC-BY-4.0.