Rai220 committed
Commit 3448fa4 · verified · Parent: 473c9c7

Upload folder using huggingface_hub
README.md CHANGED
@@ -1,3 +1,5 @@
- Nano GPT Model
+ # KrestGPT
+
+ A NanoGPT model trained from Andrej Karpathy's example: https://github.com/karpathy/nanochat/tree/master

  I did it for lulz
report/base-model-evaluation.md ADDED
@@ -0,0 +1,28 @@
+ ## Base model evaluation
+ timestamp: 2025-10-15 11:43:07
+
+ - Model: base_model (step 21400)
+ - CORE metric: 0.2069
+ - hellaswag_zeroshot: 0.2584
+ - jeopardy: 0.0619
+ - bigbench_qa_wikidata: 0.5099
+ - arc_easy: 0.5146
+ - arc_challenge: 0.1251
+ - copa: 0.2800
+ - commonsense_qa: 0.2342
+ - piqa: 0.3711
+ - openbook_qa: 0.1227
+ - lambada_openai: 0.3761
+ - hellaswag: 0.2569
+ - winograd: 0.2821
+ - winogrande: 0.0718
+ - bigbench_dyck_languages: 0.1070
+ - agi_eval_lsat_ar: 0.0924
+ - bigbench_cs_algorithms: 0.3765
+ - bigbench_operators: 0.1524
+ - bigbench_repeat_copy_logic: 0.0000
+ - squad: 0.2328
+ - coqa: 0.2036
+ - boolq: -0.2587
+ - bigbench_language_identification: 0.1816
+
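A note on aggregation (an assumption, not stated in the report, but it reproduces the number above exactly): the CORE metric appears to be the unweighted mean of the 22 per-task scores.

```python
# Per-task scores copied from the base model evaluation above.
task_scores = [
    0.2584, 0.0619, 0.5099, 0.5146, 0.1251, 0.2800, 0.2342, 0.3711,
    0.1227, 0.3761, 0.2569, 0.2821, 0.0718, 0.1070, 0.0924, 0.3765,
    0.1524, 0.0000, 0.2328, 0.2036, -0.2587, 0.1816,
]
# Assumed aggregation: plain unweighted mean across tasks.
core = sum(task_scores) / len(task_scores)
print(round(core, 4))  # 0.2069
```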
report/base-model-loss.md ADDED
@@ -0,0 +1,14 @@
+ ## Base model loss
+ timestamp: 2025-10-15 11:14:18
+
+ - train bpb: 0.8171
+ - val bpb: 0.8145
+ - sample 0: <|bos|>The capital of France is Paris, which is located in the north of the country. The city is the
+ - sample 1: <|bos|>The chemical symbol of gold is Au. The chemical symbol of gold is Au. The chemical symbol of gold is
+ - sample 2: <|bos|>If yesterday was Friday, then tomorrow will be Friday. That’s because tomorrow is Friday, and Friday is Friday. That’s
+ - sample 3: <|bos|>The opposite of hot is cold. The opposite of cold is hot. The opposite of hot is cold.
+ - sample 4: <|bos|>The planets of the solar system are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune,
+ - sample 5: <|bos|>My favorite color is red. I am a redhead. I am a redhead. I am
+ - sample 6: <|bos|>If 5*x + 3 = 13, then x is 13.
+ If 5*x + 3 = 13, then
+
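For reference, bits-per-byte (bpb) is the standard tokenizer-independent loss unit: cross-entropy summed over tokens, converted from nats to bits, and normalized by the raw byte count of the evaluated text. A minimal sketch of the conversion (the helper name is ours, not from the report):

```python
import math

def bits_per_byte(total_loss_nats: float, total_bytes: int) -> float:
    # Sum of per-token cross-entropy losses (in nats) -> bits, then
    # divide by the number of UTF-8 bytes in the evaluated text.
    return total_loss_nats / math.log(2) / total_bytes
```

Because the denominator is bytes rather than tokens, bpb stays comparable across tokenizers with different compression ratios.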
report/base-model-training.md ADDED
@@ -0,0 +1,39 @@
+ ## Base model training
+ timestamp: 2025-10-15 11:11:26
+
+ - run: d26
+ - depth: 20
+ - max_seq_len: 2048
+ - num_iterations: -1
+ - target_flops: -1.0000
+ - target_param_data_ratio: 20
+ - device_batch_size: 32
+ - total_batch_size: 524,288
+ - embedding_lr: 0.2000
+ - unembedding_lr: 0.0040
+ - weight_decay: 0.0000
+ - matrix_lr: 0.0200
+ - grad_clip: 1.0000
+ - eval_every: 250
+ - eval_tokens: 10,485,760
+ - core_metric_every: 2000
+ - core_metric_max_per_task: 500
+ - sample_every: 2000
+ - model_tag:
+ - Number of parameters: 560,988,160
+ - Number of FLOPs per token: 3.491758e+09
+ - Calculated number of iterations: 21,400
+ - Number of training tokens: 11,219,763,200
+ - Tokens : Params ratio: 20.0000
+ - DDP world size: 1
+ - warmup_ratio: 0.0000
+ - warmdown_ratio: 0.2000
+ - final_lr_frac: 0.0000
+ - Minimum validation bpb: 0.8143
+ - Final validation bpb: 0.8143
+ - CORE metric estimate: 0.2122
+ - MFU %: 46.54%
+ - Total training flops: 3.917670e+19
+ - Total training time: 1331.57m
+ - Peak memory usage: 79349.52MiB
+
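The derived quantities in this section follow directly from the config. A sketch of the arithmetic: the FLOPs-per-token decomposition below (6 × non-embedding params plus an attention term, with model_dim = depth × 64 = 1280 assumed) is our reconstruction, but it matches the reported 3.491758e+09 exactly.

```python
# Values copied from the training config above.
n_params     = 560_988_160
vocab, dim   = 65_536, 1_280   # dim = depth * 64 is an assumption
depth, seqlen = 20, 2_048
batch_tokens = 524_288
ratio        = 20              # target_param_data_ratio

# Chinchilla-style data budget: 20 tokens per parameter.
train_tokens = ratio * n_params                 # 11,219,763,200
iterations   = train_tokens // batch_tokens     # 21,400

# Assumed FLOPs model: 6 * (params minus one vocab*dim embedding)
# plus 12 * depth * dim * seq_len for attention.
flops_tok   = 6 * (n_params - vocab * dim) + 12 * depth * dim * seqlen
total_flops = flops_tok * train_tokens          # ~3.9177e19
print(train_tokens, iterations, flops_tok)
```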
report/chat-evaluation-mid.md ADDED
@@ -0,0 +1,21 @@
+ ## Chat evaluation mid
+ timestamp: 2025-10-15 13:24:50
+
+ - source: mid
+ - task_name: None
+ - dtype: bfloat16
+ - temperature: 0.0000
+ - max_new_tokens: 512
+ - num_samples: 1
+ - top_k: 50
+ - batch_size: 8
+ - model_tag: None
+ - step: None
+ - max_problems: None
+ - ARC-Easy: 0.3906
+ - ARC-Challenge: 0.2739
+ - MMLU: 0.3094
+ - GSM8K: 0.0273
+ - HumanEval: 0.0671
+ - ChatCORE metric: 0.0786
+
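How ChatCORE aggregates is not spelled out here; the numbers are consistent with a mean of chance-adjusted scores, (acc − baseline) / (1 − baseline), using a 0.25 random-guess baseline for the four-way multiple-choice tasks and 0 for the generative ones. This is an assumption, but it reproduces the reported 0.0786:

```python
# (accuracy, assumed chance baseline) per task, from the table above.
scores = {
    "ARC-Easy":      (0.3906, 0.25),
    "ARC-Challenge": (0.2739, 0.25),
    "MMLU":          (0.3094, 0.25),
    "GSM8K":         (0.0273, 0.0),   # generative: no guessing baseline
    "HumanEval":     (0.0671, 0.0),
}
centered = [(acc - b) / (1 - b) for acc, b in scores.values()]
chatcore = sum(centered) / len(centered)
print(round(chatcore, 4))  # 0.0786
```

The same formula also reproduces the SFT checkpoint's 0.1026 from its per-task scores.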
report/chat-evaluation-sft.md ADDED
@@ -0,0 +1,21 @@
+ ## Chat evaluation sft
+ timestamp: 2025-10-15 14:40:31
+
+ - source: sft
+ - task_name: None
+ - dtype: bfloat16
+ - temperature: 0.0000
+ - max_new_tokens: 512
+ - num_samples: 1
+ - top_k: 50
+ - batch_size: 8
+ - model_tag: None
+ - step: None
+ - max_problems: None
+ - ARC-Easy: 0.4154
+ - ARC-Challenge: 0.3157
+ - MMLU: 0.3198
+ - GSM8K: 0.0387
+ - HumanEval: 0.0732
+ - ChatCORE metric: 0.1026
+
report/chat-sft.md ADDED
@@ -0,0 +1,23 @@
+ ## Chat SFT
+ timestamp: 2025-10-15 13:53:40
+
+ - run: d26
+ - source: mid
+ - dtype: bfloat16
+ - device_batch_size: 4
+ - num_epochs: 1
+ - max_iterations: -1
+ - target_examples_per_step: 32
+ - unembedding_lr: 0.0040
+ - embedding_lr: 0.2000
+ - matrix_lr: 0.0200
+ - weight_decay: 0.0000
+ - init_lr_frac: 0.0200
+ - eval_every: 100
+ - eval_steps: 100
+ - eval_metrics_every: 200
+ - Training rows: 20,843
+ - Number of iterations: 651
+ - Training loss: 1.0478
+ - Validation loss: 1.0655
+
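The iteration count follows from one epoch over the SFT rows at the target examples-per-step. A sketch (assuming the trailing partial step is dropped, which is what matches the reported 651):

```python
rows = 20_843              # Training rows from the section above
examples_per_step = 32     # target_examples_per_step
# One epoch, drop-last: 651 full steps cover 20,832 rows; 11 are left over.
iterations = rows // examples_per_step
print(iterations)  # 651
```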
report/header.md ADDED
@@ -0,0 +1,36 @@
+ # nanochat training report
+
+ Generated: 2025-10-14 11:16:29
+
+ ## Environment
+
+ ### Git Information
+ - Branch: master
+ - Commit: dd6ff9a (dirty)
+ - Message: fix bug in fallback case of find_largest_model
+
+ ### Hardware
+ - Platform: Linux
+ - CPUs: 20 cores (20 logical)
+ - Memory: 235.9 GB
+ - GPUs: 1x NVIDIA H100 80GB HBM3
+ - GPU Memory: 79.2 GB total
+ - CUDA Version: 12.8
+ - Hourly Rate: $3.00/hour
+
+ ### Software
+ - Python: 3.10.12
+ - PyTorch: 2.8.0+cu128
+
+
+ ### Bloat
+ - Characters: 39,122
+ - Lines: 899
+ - Files: 9
+ - Tokens (approx): 9,780
+ - Dependencies (uv.lock lines): 2,004
+
+ Run started: 2025-10-14 11:16:29
+
+ ---
+
report/midtraining.md ADDED
@@ -0,0 +1,20 @@
+ ## Midtraining
+ timestamp: 2025-10-15 12:37:55
+
+ - run: d26
+ - dtype: bfloat16
+ - max_seq_len: 2048
+ - device_batch_size: 32
+ - unembedding_lr: 0.0040
+ - embedding_lr: 0.2000
+ - matrix_lr: 0.0200
+ - init_lr_frac: 1.0000
+ - weight_decay: 0.0000
+ - final_lr_frac: 0.0000
+ - eval_every: 150
+ - eval_tokens: 10,485,760
+ - total_batch_size: 524,288
+ - Number of iterations: 771
+ - DDP world size: 1
+ - Minimum validation bpb: 0.4171
+
report/report.md ADDED
@@ -0,0 +1,270 @@
+ # nanochat training report
+
+ Generated: 2025-10-14 11:16:29
+
+ ## Environment
+
+ ### Git Information
+ - Branch: master
+ - Commit: dd6ff9a (dirty)
+ - Message: fix bug in fallback case of find_largest_model
+
+ ### Hardware
+ - Platform: Linux
+ - CPUs: 20 cores (20 logical)
+ - Memory: 235.9 GB
+ - GPUs: 1x NVIDIA H100 80GB HBM3
+ - GPU Memory: 79.2 GB total
+ - CUDA Version: 12.8
+ - Hourly Rate: $3.00/hour
+
+ ### Software
+ - Python: 3.10.12
+ - PyTorch: 2.8.0+cu128
+
+
+ ### Bloat
+ - Characters: 39,122
+ - Lines: 899
+ - Files: 9
+ - Tokens (approx): 9,780
+ - Dependencies (uv.lock lines): 2,004
+
+ Run started: 2025-10-14 11:16:29
+
+ ---
+
+ ## Tokenizer training
+ timestamp: 2025-10-14 11:17:38
+
+ - max_chars: 2,000,000,000
+ - doc_cap: 10,000
+ - vocab_size: 65,536
+ - train_time: 65.5560
+ - num_special_tokens: 9
+ - token_bytes_min: 1
+ - token_bytes_max: 32
+ - token_bytes_mean: 6.9151
+ - token_bytes_std: 2.8736
+
+
+ ## Tokenizer evaluation
+ timestamp: 2025-10-14 11:17:42
+
+ ### Comparison with GPT-2
+
+ | Text Type | Bytes | GPT-2 Tokens | GPT-2 Ratio | Ours Tokens | Ours Ratio | Relative Diff % |
+ |-----------|-------|--------------|--------------|-------------|------------|-----------------|
+ | news | 1819 | 404 | 4.50 | 375 | 4.85 | +7.2% |
+ | korean | 893 | 745 | 1.20 | 721 | 1.24 | +3.2% |
+ | code | 1259 | 576 | 2.19 | 493 | 2.55 | +14.4% |
+ | math | 1834 | 936 | 1.96 | 966 | 1.90 | -3.2% |
+ | science | 1112 | 260 | 4.28 | 225 | 4.94 | +13.5% |
+ | fwe-train | 4208518 | 900364 | 4.67 | 856901 | 4.91 | +4.8% |
+ | fwe-val | 4908443 | 1059062 | 4.63 | 1010356 | 4.86 | +4.6% |
+
+ ### Comparison with GPT-4
+
+ | Text Type | Bytes | GPT-4 Tokens | GPT-4 Ratio | Ours Tokens | Ours Ratio | Relative Diff % |
+ |-----------|-------|--------------|--------------|-------------|------------|-----------------|
+ | news | 1819 | 387 | 4.70 | 375 | 4.85 | +3.1% |
+ | korean | 893 | 364 | 2.45 | 721 | 1.24 | -98.1% |
+ | code | 1259 | 309 | 4.07 | 493 | 2.55 | -59.5% |
+ | math | 1834 | 832 | 2.20 | 966 | 1.90 | -16.1% |
+ | science | 1112 | 249 | 4.47 | 225 | 4.94 | +9.6% |
+ | fwe-train | 4208518 | 874799 | 4.81 | 856901 | 4.91 | +2.0% |
+ | fwe-val | 4908443 | 1029691 | 4.77 | 1010356 | 4.86 | +1.9% |
+
+
+ ## Base model training
+ timestamp: 2025-10-15 11:11:26
+
+ - run: d26
+ - depth: 20
+ - max_seq_len: 2048
+ - num_iterations: -1
+ - target_flops: -1.0000
+ - target_param_data_ratio: 20
+ - device_batch_size: 32
+ - total_batch_size: 524,288
+ - embedding_lr: 0.2000
+ - unembedding_lr: 0.0040
+ - weight_decay: 0.0000
+ - matrix_lr: 0.0200
+ - grad_clip: 1.0000
+ - eval_every: 250
+ - eval_tokens: 10,485,760
+ - core_metric_every: 2000
+ - core_metric_max_per_task: 500
+ - sample_every: 2000
+ - model_tag:
+ - Number of parameters: 560,988,160
+ - Number of FLOPs per token: 3.491758e+09
+ - Calculated number of iterations: 21,400
+ - Number of training tokens: 11,219,763,200
+ - Tokens : Params ratio: 20.0000
+ - DDP world size: 1
+ - warmup_ratio: 0.0000
+ - warmdown_ratio: 0.2000
+ - final_lr_frac: 0.0000
+ - Minimum validation bpb: 0.8143
+ - Final validation bpb: 0.8143
+ - CORE metric estimate: 0.2122
+ - MFU %: 46.54%
+ - Total training flops: 3.917670e+19
+ - Total training time: 1331.57m
+ - Peak memory usage: 79349.52MiB
+
+
+ ## Base model loss
+ timestamp: 2025-10-15 11:14:18
+
+ - train bpb: 0.8171
+ - val bpb: 0.8145
+ - sample 0: <|bos|>The capital of France is Paris, which is located in the north of the country. The city is the
+ - sample 1: <|bos|>The chemical symbol of gold is Au. The chemical symbol of gold is Au. The chemical symbol of gold is
+ - sample 2: <|bos|>If yesterday was Friday, then tomorrow will be Friday. That’s because tomorrow is Friday, and Friday is Friday. That’s
+ - sample 3: <|bos|>The opposite of hot is cold. The opposite of cold is hot. The opposite of hot is cold.
+ - sample 4: <|bos|>The planets of the solar system are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune,
+ - sample 5: <|bos|>My favorite color is red. I am a redhead. I am a redhead. I am
+ - sample 6: <|bos|>If 5*x + 3 = 13, then x is 13.
+ If 5*x + 3 = 13, then
+
+
+ ## Base model evaluation
+ timestamp: 2025-10-15 11:43:07
+
+ - Model: base_model (step 21400)
+ - CORE metric: 0.2069
+ - hellaswag_zeroshot: 0.2584
+ - jeopardy: 0.0619
+ - bigbench_qa_wikidata: 0.5099
+ - arc_easy: 0.5146
+ - arc_challenge: 0.1251
+ - copa: 0.2800
+ - commonsense_qa: 0.2342
+ - piqa: 0.3711
+ - openbook_qa: 0.1227
+ - lambada_openai: 0.3761
+ - hellaswag: 0.2569
+ - winograd: 0.2821
+ - winogrande: 0.0718
+ - bigbench_dyck_languages: 0.1070
+ - agi_eval_lsat_ar: 0.0924
+ - bigbench_cs_algorithms: 0.3765
+ - bigbench_operators: 0.1524
+ - bigbench_repeat_copy_logic: 0.0000
+ - squad: 0.2328
+ - coqa: 0.2036
+ - boolq: -0.2587
+ - bigbench_language_identification: 0.1816
+
+
+ ## Midtraining
+ timestamp: 2025-10-15 12:37:55
+
+ - run: d26
+ - dtype: bfloat16
+ - max_seq_len: 2048
+ - device_batch_size: 32
+ - unembedding_lr: 0.0040
+ - embedding_lr: 0.2000
+ - matrix_lr: 0.0200
+ - init_lr_frac: 1.0000
+ - weight_decay: 0.0000
+ - final_lr_frac: 0.0000
+ - eval_every: 150
+ - eval_tokens: 10,485,760
+ - total_batch_size: 524,288
+ - Number of iterations: 771
+ - DDP world size: 1
+ - Minimum validation bpb: 0.4171
+
+
+ ## Chat evaluation mid
+ timestamp: 2025-10-15 13:24:50
+
+ - source: mid
+ - task_name: None
+ - dtype: bfloat16
+ - temperature: 0.0000
+ - max_new_tokens: 512
+ - num_samples: 1
+ - top_k: 50
+ - batch_size: 8
+ - model_tag: None
+ - step: None
+ - max_problems: None
+ - ARC-Easy: 0.3906
+ - ARC-Challenge: 0.2739
+ - MMLU: 0.3094
+ - GSM8K: 0.0273
+ - HumanEval: 0.0671
+ - ChatCORE metric: 0.0786
+
+
+ ## Chat SFT
+ timestamp: 2025-10-15 13:53:40
+
+ - run: d26
+ - source: mid
+ - dtype: bfloat16
+ - device_batch_size: 4
+ - num_epochs: 1
+ - max_iterations: -1
+ - target_examples_per_step: 32
+ - unembedding_lr: 0.0040
+ - embedding_lr: 0.2000
+ - matrix_lr: 0.0200
+ - weight_decay: 0.0000
+ - init_lr_frac: 0.0200
+ - eval_every: 100
+ - eval_steps: 100
+ - eval_metrics_every: 200
+ - Training rows: 20,843
+ - Number of iterations: 651
+ - Training loss: 1.0478
+ - Validation loss: 1.0655
+
+
+ ## Chat evaluation sft
+ timestamp: 2025-10-15 14:40:31
+
+ - source: sft
+ - task_name: None
+ - dtype: bfloat16
+ - temperature: 0.0000
+ - max_new_tokens: 512
+ - num_samples: 1
+ - top_k: 50
+ - batch_size: 8
+ - model_tag: None
+ - step: None
+ - max_problems: None
+ - ARC-Easy: 0.4154
+ - ARC-Challenge: 0.3157
+ - MMLU: 0.3198
+ - GSM8K: 0.0387
+ - HumanEval: 0.0732
+ - ChatCORE metric: 0.1026
+
+
+ ## Summary
+
+ - Characters: 39,122
+ - Lines: 899
+ - Files: 9
+ - Tokens (approx): 9,780
+ - Dependencies (uv.lock lines): 2,004
+
+ | Metric | BASE | MID | SFT | RL |
+ |-----------------|----------|----------|----------|----------|
+ | CORE | 0.2069 | - | - | - |
+ | ARC-Challenge | - | 0.2739 | 0.3157 | - |
+ | ARC-Easy | - | 0.3906 | 0.4154 | - |
+ | GSM8K | - | 0.0273 | 0.0387 | - |
+ | HumanEval | - | 0.0671 | 0.0732 | - |
+ | MMLU | - | 0.3094 | 0.3198 | - |
+ | ChatCORE | - | 0.0786 | 0.1026 | - |
+
+ Total wall clock time: 27h24m
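Given the $3.00/hour rate from the Environment section, the wall-clock total above implies a run cost of roughly:

```python
# Sketch: run cost = wall-clock hours * hourly GPU rate (both from the report).
hours = 27 + 24 / 60      # 27h24m
rate = 3.00               # $/hour, from the Environment section
cost = hours * rate
print(f"${cost:.2f}")     # $82.20
```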
report/tokenizer-evaluation.md ADDED
@@ -0,0 +1,27 @@
+ ## Tokenizer evaluation
+ timestamp: 2025-10-14 11:17:42
+
+ ### Comparison with GPT-2
+
+ | Text Type | Bytes | GPT-2 Tokens | GPT-2 Ratio | Ours Tokens | Ours Ratio | Relative Diff % |
+ |-----------|-------|--------------|--------------|-------------|------------|-----------------|
+ | news | 1819 | 404 | 4.50 | 375 | 4.85 | +7.2% |
+ | korean | 893 | 745 | 1.20 | 721 | 1.24 | +3.2% |
+ | code | 1259 | 576 | 2.19 | 493 | 2.55 | +14.4% |
+ | math | 1834 | 936 | 1.96 | 966 | 1.90 | -3.2% |
+ | science | 1112 | 260 | 4.28 | 225 | 4.94 | +13.5% |
+ | fwe-train | 4208518 | 900364 | 4.67 | 856901 | 4.91 | +4.8% |
+ | fwe-val | 4908443 | 1059062 | 4.63 | 1010356 | 4.86 | +4.6% |
+
+ ### Comparison with GPT-4
+
+ | Text Type | Bytes | GPT-4 Tokens | GPT-4 Ratio | Ours Tokens | Ours Ratio | Relative Diff % |
+ |-----------|-------|--------------|--------------|-------------|------------|-----------------|
+ | news | 1819 | 387 | 4.70 | 375 | 4.85 | +3.1% |
+ | korean | 893 | 364 | 2.45 | 721 | 1.24 | -98.1% |
+ | code | 1259 | 309 | 4.07 | 493 | 2.55 | -59.5% |
+ | math | 1834 | 832 | 2.20 | 966 | 1.90 | -16.1% |
+ | science | 1112 | 249 | 4.47 | 225 | 4.94 | +9.6% |
+ | fwe-train | 4208518 | 874799 | 4.81 | 856901 | 4.91 | +2.0% |
+ | fwe-val | 4908443 | 1029691 | 4.77 | 1010356 | 4.86 | +1.9% |
+
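The "Ratio" columns are bytes per token, and "Relative Diff %" is consistent with the token-count saving relative to the baseline tokenizer (positive means ours needs fewer tokens). A sketch, checked against two rows of the tables:

```python
def rel_diff(baseline_tokens: int, ours_tokens: int) -> float:
    # Percent fewer tokens than the baseline tokenizer for the same text.
    return 100 * (baseline_tokens - ours_tokens) / baseline_tokens

print(round(rel_diff(404, 375), 1))  # 7.2   -> table's +7.2% (news vs GPT-2)
print(round(rel_diff(364, 721), 1))  # -98.1 -> korean vs GPT-4
print(round(1819 / 375, 2))          # 4.85  -> "Ours Ratio" for news
```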
report/tokenizer-training.md ADDED
@@ -0,0 +1,13 @@
+ ## Tokenizer training
+ timestamp: 2025-10-14 11:17:38
+
+ - max_chars: 2,000,000,000
+ - doc_cap: 10,000
+ - vocab_size: 65,536
+ - train_time: 65.5560
+ - num_special_tokens: 9
+ - token_bytes_min: 1
+ - token_bytes_max: 32
+ - token_bytes_mean: 6.9151
+ - token_bytes_std: 2.8736
+