melihcatal committed
Commit 9ba7718 · verified · 1 Parent(s): 807ca82

Add files using upload-large-folder tool

Files changed (28):
  1. .gitattributes +1 -0
  2. qwen3-4b-instruct/base/adapter/README.md +207 -0
  3. qwen3-4b-instruct/base/adapter/adapter_config.json +46 -0
  4. qwen3-4b-instruct/base/adapter/adapter_model.safetensors +3 -0
  5. qwen3-4b-instruct/base/audit_results.json +137 -0
  6. qwen3-4b-instruct/base/audit_scores.npz +3 -0
  7. qwen3-4b-instruct/base/canary_meta.json +0 -0
  8. qwen3-4b-instruct/base/codecarbon.csv +2 -0
  9. qwen3-4b-instruct/base/epochs/epoch_001/adapter/README.md +207 -0
  10. qwen3-4b-instruct/base/epochs/epoch_001/adapter/adapter_config.json +46 -0
  11. qwen3-4b-instruct/base/epochs/epoch_001/adapter/adapter_model.safetensors +3 -0
  12. qwen3-4b-instruct/base/epochs/epoch_001/audit_results.json +137 -0
  13. qwen3-4b-instruct/base/epochs/epoch_001/audit_scores.npz +3 -0
  14. qwen3-4b-instruct/base/epochs/epoch_002/adapter/README.md +207 -0
  15. qwen3-4b-instruct/base/epochs/epoch_002/adapter/adapter_config.json +46 -0
  16. qwen3-4b-instruct/base/epochs/epoch_002/adapter/adapter_model.safetensors +3 -0
  17. qwen3-4b-instruct/base/epochs/epoch_002/audit_results.json +137 -0
  18. qwen3-4b-instruct/base/epochs/epoch_002/audit_scores.npz +3 -0
  19. qwen3-4b-instruct/base/metrics.jsonl +49 -0
  20. qwen3-4b-instruct/base/pretrain_lm_head.pt +3 -0
  21. qwen3-4b-instruct/base/resolved_config.yaml +100 -0
  22. qwen3-4b-instruct/base/scalars.csv +537 -0
  23. qwen3-4b-instruct/base/summary.json +71 -0
  24. qwen3-4b-instruct/base/tensorboard/events.out.tfevents.1773761739.7b654b6988b0.32156.0 +3 -0
  25. qwen3-4b-instruct/base/tokenizer/chat_template.jinja +61 -0
  26. qwen3-4b-instruct/base/tokenizer/tokenizer.json +3 -0
  27. qwen3-4b-instruct/base/tokenizer/tokenizer_config.json +516 -0
  28. qwen3-4b-instruct/base/train.log +43 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ qwen3-4b-instruct/base/tokenizer/tokenizer.json filter=lfs diff=lfs merge=lfs -text
qwen3-4b-instruct/base/adapter/README.md ADDED
@@ -0,0 +1,207 @@
+ ---
+ base_model: Qwen/Qwen3-4B-Instruct-2507
+ library_name: peft
+ pipeline_tag: text-generation
+ tags:
+ - base_model:adapter:Qwen/Qwen3-4B-Instruct-2507
+ - lora
+ - transformers
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.18.1
qwen3-4b-instruct/base/adapter/adapter_config.json ADDED
@@ -0,0 +1,46 @@
+ {
+   "alora_invocation_tokens": null,
+   "alpha_pattern": {},
+   "arrow_config": null,
+   "auto_mapping": null,
+   "base_model_name_or_path": "Qwen/Qwen3-4B-Instruct-2507",
+   "bias": "none",
+   "corda_config": null,
+   "ensure_weight_tying": true,
+   "eva_config": null,
+   "exclude_modules": null,
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 32,
+   "lora_bias": false,
+   "lora_dropout": 0.05,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": [
+     "lm_head",
+     "embed_tokens"
+   ],
+   "peft_type": "LORA",
+   "peft_version": "0.18.1",
+   "qalora_group_size": 16,
+   "r": 16,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "o_proj",
+     "k_proj",
+     "v_proj",
+     "q_proj"
+   ],
+   "target_parameters": null,
+   "task_type": "CAUSAL_LM",
+   "trainable_token_indices": null,
+   "use_dora": false,
+   "use_qalora": false,
+   "use_rslora": false
+ }
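The config above describes a rank-16 LoRA adapter (`r: 16`, `lora_alpha: 32`) on the attention projections, with `lm_head` and `embed_tokens` saved as fully trained modules. A minimal sketch of what these numbers imply — the standard (non-rsLoRA) scaling factor applied to the low-rank update is `lora_alpha / r`:

```python
import json

# The adapter_config.json above, reduced to the fields used here.
config = json.loads('{"r": 16, "lora_alpha": 32, "use_rslora": false}')

# Standard LoRA scaling (use_rslora is false, so it's alpha / r, not alpha / sqrt(r)).
scaling = config["lora_alpha"] / config["r"]
print(scaling)  # 2.0

# Loading the adapter itself would look roughly like this (requires the
# weights and a GPU, so it is shown for reference only):
# from transformers import AutoModelForCausalLM
# from peft import PeftModel
# base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
# model = PeftModel.from_pretrained(base, "qwen3-4b-instruct/base/adapter")
```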
qwen3-4b-instruct/base/adapter/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:951990d4c34c593ed5c2777e6ebe986acc6b01f8038c050c69b3cd13c8dbc3af
+ size 4721857072
qwen3-4b-instruct/base/audit_results.json ADDED
@@ -0,0 +1,137 @@
+ {
+   "delta": 1e-05,
+   "num_canaries": 500,
+   "num_members": 250,
+   "paper_guess_fraction": 0.2,
+   "paper_guess_steps": 20,
+   "loss": {
+     "auc": 0.968584,
+     "empirical_epsilon": {
+       "0.05": 3.4791953936219215,
+       "0.01": 3.023197554051876
+     },
+     "empirical_epsilon_details": {
+       "0.05": {
+         "epsilon": 3.4791953936219215,
+         "num_guesses": 100,
+         "correct_guesses": 100,
+         "candidate_num_guesses": [
+           5,
+           10,
+           15,
+           20,
+           25,
+           30,
+           35,
+           40,
+           45,
+           50,
+           55,
+           60,
+           65,
+           70,
+           75,
+           80,
+           85,
+           90,
+           95,
+           100
+         ],
+         "direction": "lower"
+       },
+       "0.01": {
+         "epsilon": 3.023197554051876,
+         "num_guesses": 100,
+         "correct_guesses": 100,
+         "candidate_num_guesses": [
+           5,
+           10,
+           15,
+           20,
+           25,
+           30,
+           35,
+           40,
+           45,
+           50,
+           55,
+           60,
+           65,
+           70,
+           75,
+           80,
+           85,
+           90,
+           95,
+           100
+         ],
+         "direction": "lower"
+       }
+     }
+   },
+   "embedding": {
+     "auc": 0.883776,
+     "empirical_epsilon": {
+       "0.05": 3.4791953936219215,
+       "0.01": 3.023197554051876
+     },
+     "empirical_epsilon_details": {
+       "0.05": {
+         "epsilon": 3.4791953936219215,
+         "num_guesses": 100,
+         "correct_guesses": 100,
+         "candidate_num_guesses": [
+           5,
+           10,
+           15,
+           20,
+           25,
+           30,
+           35,
+           40,
+           45,
+           50,
+           55,
+           60,
+           65,
+           70,
+           75,
+           80,
+           85,
+           90,
+           95,
+           100
+         ],
+         "direction": "lower"
+       },
+       "0.01": {
+         "epsilon": 3.023197554051876,
+         "num_guesses": 100,
+         "correct_guesses": 100,
+         "candidate_num_guesses": [
+           5,
+           10,
+           15,
+           20,
+           25,
+           30,
+           35,
+           40,
+           45,
+           50,
+           55,
+           60,
+           65,
+           70,
+           75,
+           80,
+           85,
+           90,
+           95,
+           100
+         ],
+         "direction": "lower"
+       }
+     }
+   }
+ }
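The `auc` fields above summarize a membership-inference audit over 500 canaries (250 members), where `"direction": "lower"` indicates a trained-on canary is expected to score *lower* (e.g. lower loss). A sketch of that AUC statistic under those assumptions — the probability a randomly chosen member outranks a randomly chosen non-member (the scores below are made up; the real ones live in `audit_scores.npz`):

```python
# Rank-based AUC: how often does a member canary score lower than a non-member?
def auc_lower_is_member(member_scores, nonmember_scores):
    wins = 0.0
    for m in member_scores:
        for n in nonmember_scores:
            if m < n:
                wins += 1.0
            elif m == n:
                wins += 0.5  # ties count half
    return wins / (len(member_scores) * len(nonmember_scores))

members = [0.9, 1.1, 1.0]      # hypothetical losses for trained-on canaries
nonmembers = [1.4, 1.2, 1.6]   # hypothetical losses for held-out canaries
print(auc_lower_is_member(members, nonmembers))  # 1.0 (perfect separation)
```

The `empirical_epsilon` entries come from a guess-based lower bound: with all 100 of 100 guesses correct at `delta = 1e-05`, the audit certifies epsilon of roughly 3.02–3.48 depending on the confidence level (0.01 vs 0.05); the exact bound formula is internal to the audit pipeline and not shown in this diff.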
qwen3-4b-instruct/base/audit_scores.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd3dd9fd37e8fbc94a3c62416c334e11928ed1d4d26de64dd88997b4246b9fcb
+ size 12784
qwen3-4b-instruct/base/canary_meta.json ADDED
The diff for this file is too large to render. See raw diff
 
qwen3-4b-instruct/base/codecarbon.csv ADDED
@@ -0,0 +1,2 @@
+ timestamp,project_name,run_id,experiment_id,duration,emissions,emissions_rate,cpu_power,gpu_power,ram_power,cpu_energy,gpu_energy,ram_energy,energy_consumed,water_consumed,country_name,country_iso_code,region,cloud_provider,cloud_region,os,python_version,codecarbon_version,cpu_count,cpu_model,gpu_count,gpu_model,longitude,latitude,ram_total_size,tracking_mode,cpu_utilization_percent,gpu_utilization_percent,ram_utilization_percent,ram_used_gb,on_cloud,pue,wue
+ 2026-03-17T16:14:46,codedp-qwen3-4b-instruct-cpt-base,c4455506-be9b-4ebf-9411-44a1ca74c256,5b0fa12a-3dd7-45bb-9766-cc326314d9f1,2345.9966679112986,0.09022432714096462,3.8458847096868924e-05,72.02285277932866,3280.290622412428,54.0,0.045218986505725985,2.137964725370466,0.03390259211605879,2.2170863039922497,0.0,Sweden,SWE,östergötland county,,,Linux-6.8.0-94-generic-x86_64-with-glibc2.39,3.11.0,3.2.3,256,AMD EPYC 9554 64-Core Processor,8,8 x NVIDIA H200,16.1885,58.594,1511.49019241333,machine,3.3142796066695253,88.58721675929884,5.287772552372644,79.7947571596665,N,1.0,0.0
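The CodeCarbon row records a ~39-minute run on 8× H200 that emitted about 0.09 kg CO2eq. Two internal consistency checks on its columns — `emissions_rate` is `emissions / duration`, and `energy_consumed` is the sum of the CPU, GPU, and RAM energies:

```python
# Values copied from the codecarbon.csv row above.
duration = 2345.9966679112986          # seconds
emissions = 0.09022432714096462        # kg CO2eq
cpu_energy = 0.045218986505725985      # kWh
gpu_energy = 2.137964725370466         # kWh
ram_energy = 0.03390259211605879       # kWh

rate = emissions / duration
print(rate)  # ≈ 3.8459e-05, matching the logged emissions_rate

total = cpu_energy + gpu_energy + ram_energy
print(total)  # ≈ 2.21709 kWh, matching the logged energy_consumed
```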
qwen3-4b-instruct/base/epochs/epoch_001/adapter/README.md ADDED
@@ -0,0 +1,207 @@
+ (template model card identical to qwen3-4b-instruct/base/adapter/README.md above)
qwen3-4b-instruct/base/epochs/epoch_001/adapter/adapter_config.json ADDED
@@ -0,0 +1,46 @@
+ (identical to qwen3-4b-instruct/base/adapter/adapter_config.json above)
qwen3-4b-instruct/base/epochs/epoch_001/adapter/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d15fef335597efc6c9627b295490c7398f384f70071dc4ff04661f4776a9e5f
+ size 4721857072
qwen3-4b-instruct/base/epochs/epoch_001/audit_results.json ADDED
@@ -0,0 +1,137 @@
+ {
+   "delta": 1e-05,
+   "num_canaries": 500,
+   "num_members": 250,
+   "paper_guess_fraction": 0.2,
+   "paper_guess_steps": 20,
+   "loss": {
+     "auc": 0.907944,
+     "empirical_epsilon": {
+       "0.05": 3.4791953936219215,
+       "0.01": 3.023197554051876
+     },
+     "empirical_epsilon_details": {
+       "0.05": {
+         "epsilon": 3.4791953936219215,
+         "num_guesses": 100,
+         "correct_guesses": 100,
+         "candidate_num_guesses": [
+           5,
+           10,
+           15,
+           20,
+           25,
+           30,
+           35,
+           40,
+           45,
+           50,
+           55,
+           60,
+           65,
+           70,
+           75,
+           80,
+           85,
+           90,
+           95,
+           100
+         ],
+         "direction": "lower"
+       },
+       "0.01": {
+         "epsilon": 3.023197554051876,
+         "num_guesses": 100,
+         "correct_guesses": 100,
+         "candidate_num_guesses": [
+           5,
+           10,
+           15,
+           20,
+           25,
+           30,
+           35,
+           40,
+           45,
+           50,
+           55,
+           60,
+           65,
+           70,
+           75,
+           80,
+           85,
+           90,
+           95,
+           100
+         ],
+         "direction": "lower"
+       }
+     }
+   },
+   "embedding": {
+     "auc": 0.876048,
+     "empirical_epsilon": {
+       "0.05": 3.4791953936219215,
+       "0.01": 3.023197554051876
+     },
+     "empirical_epsilon_details": {
+       "0.05": {
+         "epsilon": 3.4791953936219215,
+         "num_guesses": 100,
+         "correct_guesses": 100,
+         "candidate_num_guesses": [
+           5,
+           10,
+           15,
+           20,
+           25,
+           30,
+           35,
+           40,
+           45,
+           50,
+           55,
+           60,
+           65,
+           70,
+           75,
+           80,
+           85,
+           90,
+           95,
+           100
+         ],
+         "direction": "lower"
+       },
+       "0.01": {
+         "epsilon": 3.023197554051876,
+         "num_guesses": 100,
+         "correct_guesses": 100,
+         "candidate_num_guesses": [
+           5,
+           10,
+           15,
+           20,
+           25,
+           30,
+           35,
+           40,
+           45,
+           50,
+           55,
+           60,
+           65,
+           70,
+           75,
+           80,
+           85,
+           90,
+           95,
+           100
+         ],
+         "direction": "lower"
+       }
+     }
+   }
+ }
qwen3-4b-instruct/base/epochs/epoch_001/audit_scores.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9b450e4b28f68344d4b013d7fd537cafa20bac640bcb9c531ee461b22e316ca0
+ size 12784
qwen3-4b-instruct/base/epochs/epoch_002/adapter/README.md ADDED
@@ -0,0 +1,207 @@
+ (template model card identical to qwen3-4b-instruct/base/adapter/README.md above)
qwen3-4b-instruct/base/epochs/epoch_002/adapter/adapter_config.json ADDED
@@ -0,0 +1,46 @@
+ {
+   "alora_invocation_tokens": null,
+   "alpha_pattern": {},
+   "arrow_config": null,
+   "auto_mapping": null,
+   "base_model_name_or_path": "Qwen/Qwen3-4B-Instruct-2507",
+   "bias": "none",
+   "corda_config": null,
+   "ensure_weight_tying": true,
+   "eva_config": null,
+   "exclude_modules": null,
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 32,
+   "lora_bias": false,
+   "lora_dropout": 0.05,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": [
+     "lm_head",
+     "embed_tokens"
+   ],
+   "peft_type": "LORA",
+   "peft_version": "0.18.1",
+   "qalora_group_size": 16,
+   "r": 16,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "o_proj",
+     "k_proj",
+     "v_proj",
+     "q_proj"
+   ],
+   "target_parameters": null,
+   "task_type": "CAUSAL_LM",
+   "trainable_token_indices": null,
+   "use_dora": false,
+   "use_qalora": false,
+   "use_rslora": false
+ }
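A quick way to sanity-check an adapter config like the one above, without loading any model weights, is to parse the JSON and derive the effective LoRA scaling. This is a minimal sketch; the embedded dict is a trimmed copy of a few fields from the `adapter_config.json` in this commit, and the `lora_alpha / r` scaling rule applies because `use_rslora` is `false`:

```python
import json

# Trimmed copy of fields from the adapter_config.json above (assumption:
# only the keys needed for this check are reproduced here).
config = json.loads("""{
  "peft_type": "LORA",
  "r": 16,
  "lora_alpha": 32,
  "lora_dropout": 0.05,
  "use_rslora": false,
  "target_modules": ["o_proj", "k_proj", "v_proj", "q_proj"],
  "modules_to_save": ["lm_head", "embed_tokens"]
}""")

# Standard (non-rsLoRA) scaling applied to each low-rank update: alpha / r.
scaling = config["lora_alpha"] / config["r"]
print(scaling)                              # 2.0
print(sorted(config["target_modules"]))     # ['k_proj', 'o_proj', 'q_proj', 'v_proj']
```

With `r=16` and `lora_alpha=32`, every LoRA delta is scaled by 2.0 before being added to the frozen attention projections, while `lm_head` and `embed_tokens` are trained and saved in full.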
qwen3-4b-instruct/base/epochs/epoch_002/adapter/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:951990d4c34c593ed5c2777e6ebe986acc6b01f8038c050c69b3cd13c8dbc3af
+ size 4721857072
qwen3-4b-instruct/base/epochs/epoch_002/audit_results.json ADDED
@@ -0,0 +1,137 @@
+ {
+   "delta": 1e-05,
+   "num_canaries": 500,
+   "num_members": 250,
+   "paper_guess_fraction": 0.2,
+   "paper_guess_steps": 20,
+   "loss": {
+     "auc": 0.968584,
+     "empirical_epsilon": {
+       "0.05": 3.4791953936219215,
+       "0.01": 3.023197554051876
+     },
+     "empirical_epsilon_details": {
+       "0.05": {
+         "epsilon": 3.4791953936219215,
+         "num_guesses": 100,
+         "correct_guesses": 100,
+         "candidate_num_guesses": [
+           5,
+           10,
+           15,
+           20,
+           25,
+           30,
+           35,
+           40,
+           45,
+           50,
+           55,
+           60,
+           65,
+           70,
+           75,
+           80,
+           85,
+           90,
+           95,
+           100
+         ],
+         "direction": "lower"
+       },
+       "0.01": {
+         "epsilon": 3.023197554051876,
+         "num_guesses": 100,
+         "correct_guesses": 100,
+         "candidate_num_guesses": [
+           5,
+           10,
+           15,
+           20,
+           25,
+           30,
+           35,
+           40,
+           45,
+           50,
+           55,
+           60,
+           65,
+           70,
+           75,
+           80,
+           85,
+           90,
+           95,
+           100
+         ],
+         "direction": "lower"
+       }
+     }
+   },
+   "embedding": {
+     "auc": 0.883776,
+     "empirical_epsilon": {
+       "0.05": 3.4791953936219215,
+       "0.01": 3.023197554051876
+     },
+     "empirical_epsilon_details": {
+       "0.05": {
+         "epsilon": 3.4791953936219215,
+         "num_guesses": 100,
+         "correct_guesses": 100,
+         "candidate_num_guesses": [
+           5,
+           10,
+           15,
+           20,
+           25,
+           30,
+           35,
+           40,
+           45,
+           50,
+           55,
+           60,
+           65,
+           70,
+           75,
+           80,
+           85,
+           90,
+           95,
+           100
+         ],
+         "direction": "lower"
+       },
+       "0.01": {
+         "epsilon": 3.023197554051876,
+         "num_guesses": 100,
+         "correct_guesses": 100,
+         "candidate_num_guesses": [
+           5,
+           10,
+           15,
+           20,
+           25,
+           30,
+           35,
+           40,
+           45,
+           50,
+           55,
+           60,
+           65,
+           70,
+           75,
+           80,
+           85,
+           90,
+           95,
+           100
+         ],
+         "direction": "lower"
+       }
+     }
+   }
+ }
qwen3-4b-instruct/base/epochs/epoch_002/audit_scores.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd3dd9fd37e8fbc94a3c62416c334e11928ed1d4d26de64dd88997b4246b9fcb
+ size 12784
qwen3-4b-instruct/base/metrics.jsonl ADDED
@@ -0,0 +1,49 @@
+ {"timestamp": 1773761894.8111923, "event": "train_step", "step": 10, "epoch": 1, "metrics": {"train/step_loss": 1.8352766107110416, "train/step_real_loss": 1.028106451034546, "train/lr": 5.2631578947368424e-05, "train/step_canary_loss": 14.75, "perf/step_duration_sec": 6.234770041890442, "perf/samples_per_sec": 5.453288536956348, "perf/tokens_per_sec": 3980.098677781523, "perf/logical_batch_size": 34.0, "perf/logical_token_count": 24815.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.4762544631958}}
+ {"timestamp": 1773761950.7987902, "event": "train_step", "step": 20, "epoch": 1, "metrics": {"train/step_loss": 1.0323970019817352, "train/step_real_loss": 1.0323970019817352, "train/lr": 9.999797424944042e-05, "perf/step_duration_sec": 5.150427320972085, "perf/samples_per_sec": 6.213076703305535, "perf/tokens_per_sec": 4927.9406189561805, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 25381.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.4762544631958}}
+ {"timestamp": 1773762007.3004794, "event": "train_step", "step": 30, "epoch": 1, "metrics": {"train/step_loss": 0.8551503717899323, "train/step_real_loss": 0.8551503717899323, "train/lr": 9.975508273693644e-05, "perf/step_duration_sec": 5.69609066285193, "perf/samples_per_sec": 5.617888108539722, "perf/tokens_per_sec": 4432.689276641233, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 25249.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.4762544631958}}
+ {"timestamp": 1773762065.1568909, "event": "train_step", "step": 40, "epoch": 1, "metrics": {"train/step_loss": 0.8950656801462173, "train/step_real_loss": 0.8950656801462173, "train/lr": 9.910929512300672e-05, "perf/step_duration_sec": 6.2338299779221416, "perf/samples_per_sec": 5.133280842328368, "perf/tokens_per_sec": 4016.4714290693023, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 25038.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.4762544631958}}
+ {"timestamp": 1773762121.2732794, "event": "train_step", "step": 50, "epoch": 1, "metrics": {"train/step_loss": 0.8450518101453781, "train/step_real_loss": 0.8450518101453781, "train/lr": 9.806584072891234e-05, "perf/step_duration_sec": 5.423216213937849, "perf/samples_per_sec": 5.900557664980961, "perf/tokens_per_sec": 5482.724425329497, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 29734.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.4762544631958}}
+ {"timestamp": 1773762134.073646, "event": "eval_step", "step": 50, "epoch": 1, "metrics": {"eval/loss": 0.8533683589253671, "eval/duration_sec": 12.798461285419762}}
+ {"timestamp": 1773762190.2760499, "event": "train_step", "step": 60, "epoch": 1, "metrics": {"train/step_loss": 1.114606170943289, "train/step_real_loss": 0.8427969664335251, "train/lr": 9.663316901718597e-05, "train/step_canary_loss": 9.8125, "perf/step_duration_sec": 5.968041606713086, "perf/samples_per_sec": 5.5294520673046765, "perf/tokens_per_sec": 4259.353683360148, "perf/logical_batch_size": 33.0, "perf/logical_token_count": 25420.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.4762544631958}}
+ {"timestamp": 1773762249.207255, "event": "train_step", "step": 70, "epoch": 1, "metrics": {"train/step_loss": 1.1624658794114084, "train/step_real_loss": 0.8745741844177246, "train/lr": 9.48228811713756e-05, "train/step_canary_loss": 10.375, "perf/step_duration_sec": 6.334186799824238, "perf/samples_per_sec": 5.209824250986045, "perf/tokens_per_sec": 3829.378697936894, "perf/logical_batch_size": 33.0, "perf/logical_token_count": 24256.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 16.205660820007324, "system/cuda_max_memory_allocated_gb": 94.4762544631958}}
+ {"timestamp": 1773762305.719841, "event": "train_step", "step": 80, "epoch": 1, "metrics": {"train/step_loss": 1.157600255573497, "train/step_real_loss": 0.8803409039974213, "train/lr": 9.26496361544538e-05, "train/step_canary_loss": 5.59375, "perf/step_duration_sec": 5.699764240998775, "perf/samples_per_sec": 5.9651590070052, "perf/tokens_per_sec": 4690.720329743854, "perf/logical_batch_size": 34.0, "perf/logical_token_count": 26736.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 16.205660820007324, "system/cuda_max_memory_allocated_gb": 94.4762544631958}}
+ {"timestamp": 1773762363.2025864, "event": "train_step", "step": 90, "epoch": 1, "metrics": {"train/step_loss": 0.8746908158063889, "train/step_real_loss": 0.8746908158063889, "train/lr": 9.013103200659241e-05, "perf/step_duration_sec": 5.422750173136592, "perf/samples_per_sec": 5.901064769408466, "perf/tokens_per_sec": 4211.147346069116, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 22836.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.4762544631958}}
+ {"timestamp": 1773762419.340239, "event": "train_step", "step": 100, "epoch": 1, "metrics": {"train/step_loss": 1.227062124194521, "train/step_real_loss": 0.9060328304767609, "train/lr": 8.728746334350483e-05, "train/step_canary_loss": 11.5, "perf/step_duration_sec": 5.688522285781801, "perf/samples_per_sec": 5.801155087760134, "perf/tokens_per_sec": 4329.068036096396, "perf/logical_batch_size": 33.0, "perf/logical_token_count": 24626.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 16.205660820007324, "system/cuda_max_memory_allocated_gb": 94.4762544631958}}
+ {"timestamp": 1773762432.1276581, "event": "eval_step", "step": 100, "epoch": 1, "metrics": {"eval/loss": 0.829470864473245, "eval/duration_sec": 12.785310188308358}}
+ {"timestamp": 1773762489.3303852, "event": "train_step", "step": 110, "epoch": 1, "metrics": {"train/step_loss": 1.011215921604272, "train/step_real_loss": 0.9119570553302765, "train/lr": 8.414195620927492e-05, "train/step_canary_loss": 4.1875, "perf/step_duration_sec": 5.6974792359396815, "perf/samples_per_sec": 5.792035149831895, "perf/tokens_per_sec": 4259.603062159705, "perf/logical_batch_size": 33.0, "perf/logical_token_count": 24269.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.4762544631958}}
+ {"timestamp": 1773762544.9202547, "event": "train_step", "step": 120, "epoch": 1, "metrics": {"train/step_loss": 0.7180200964212418, "train/step_real_loss": 0.7180200964212418, "train/lr": 8.071998162096612e-05, "perf/step_duration_sec": 5.694211829919368, "perf/samples_per_sec": 5.619741758088605, "perf/tokens_per_sec": 4598.00245969612, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 26182.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.4762544631958}}
+ {"timestamp": 1773762601.86658, "event": "train_step", "step": 130, "epoch": 1, "metrics": {"train/step_loss": 0.8463245183229446, "train/step_real_loss": 0.8463245183229446, "train/lr": 7.704924931484997e-05, "perf/step_duration_sec": 5.1536158989183605, "perf/samples_per_sec": 6.209232629602092, "perf/tokens_per_sec": 4267.683201733388, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 21994.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 101.70386934280396}}
+ {"timestamp": 1773762659.5638738, "event": "train_step", "step": 140, "epoch": 1, "metrics": {"train/step_loss": 0.9742496162652969, "train/step_real_loss": 0.9742496162652969, "train/lr": 7.315948336441117e-05, "perf/step_duration_sec": 5.969049285165966, "perf/samples_per_sec": 5.360987733762741, "perf/tokens_per_sec": 3927.091045847888, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 23441.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 101.70386934280396}}
+ {"timestamp": 1773762717.6212678, "event": "train_step", "step": 150, "epoch": 1, "metrics": {"train/step_loss": 0.914526179432869, "train/step_real_loss": 0.914526179432869, "train/lr": 6.908218148708247e-05, "perf/step_duration_sec": 5.703813333064318, "perf/samples_per_sec": 5.610281776666824, "perf/tokens_per_sec": 4645.31331108013, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 26496.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 101.70386934280396}}
+ {"timestamp": 1773762730.4167268, "event": "eval_step", "step": 150, "epoch": 1, "metrics": {"eval/loss": 0.8174186625923866, "eval/duration_sec": 12.793565314263105}}
+ {"timestamp": 1773762785.9700294, "event": "train_step", "step": 160, "epoch": 1, "metrics": {"train/step_loss": 0.9113822728395462, "train/step_real_loss": 0.9113822728395462, "train/lr": 6.485035998874356e-05, "perf/step_duration_sec": 5.425368802621961, "perf/samples_per_sec": 5.898216538668322, "perf/tokens_per_sec": 4679.128907832313, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 25386.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 101.70386934280396}}
+ {"timestamp": 1773762841.0786932, "event": "train_step", "step": 170, "epoch": 1, "metrics": {"train/step_loss": 0.8910564256436897, "train/step_real_loss": 0.8441949039697647, "train/lr": 6.049828641131825e-05, "train/step_canary_loss": 2.390625, "perf/step_duration_sec": 5.690398690290749, "perf/samples_per_sec": 5.799242161415912, "perf/tokens_per_sec": 4355.582332445254, "perf/logical_batch_size": 33.0, "perf/logical_token_count": 24785.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 16.205660820007324, "system/cuda_max_memory_allocated_gb": 101.70386934280396}}
+ {"timestamp": 1773762898.8441253, "event": "train_step", "step": 180, "epoch": 1, "metrics": {"train/step_loss": 1.0836328773787527, "train/step_real_loss": 0.8538245260715485, "train/lr": 5.6061202048379124e-05, "train/step_canary_loss": 8.4375, "perf/step_duration_sec": 5.692985306028277, "perf/samples_per_sec": 5.7966072677293665, "perf/tokens_per_sec": 4476.7373583439585, "perf/logical_batch_size": 33.0, "perf/logical_token_count": 25486.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 101.70386934280396}}
+ {"timestamp": 1773762934.6675656, "event": "train_epoch", "step": 184, "epoch": 1, "metrics": {"train/epoch_loss": 0.9805822407983543, "train/epoch_real_loss": 0.9071368900552877, "train/epoch_canary_loss": 8.42140839386602, "perf/epoch_duration_sec": 1085.6805565529503, "perf/epoch_samples_per_sec": 43.826887856473604, "perf/epoch_tokens_per_sec": 34574.90951038253, "perf/epoch_samples": 47582.0, "perf/epoch_tokens": 37537307.0, "system/cuda_epoch_peak_memory_gb": 101.70386934280396, "eval/loss": 0.8121764394335258, "eval/duration_sec": 12.826699289958924}}
+ {"timestamp": 1773762949.1512606, "event": "audit_epoch", "step": 184, "epoch": 1, "metrics": {"audit/delta": 1e-05, "audit/num_canaries": 500.0, "audit/num_members": 250.0, "audit/paper_guess_fraction": 0.2, "audit/paper_guess_steps": 20.0, "audit/loss/auc": 0.907944, "audit/loss/empirical_epsilon/0.05": 3.4791953936219215, "audit/loss/empirical_epsilon/0.01": 3.023197554051876, "audit/loss/empirical_epsilon_details/0.05/epsilon": 3.4791953936219215, "audit/loss/empirical_epsilon_details/0.05/num_guesses": 100.0, "audit/loss/empirical_epsilon_details/0.05/correct_guesses": 100.0, "audit/loss/empirical_epsilon_details/0.01/epsilon": 3.023197554051876, "audit/loss/empirical_epsilon_details/0.01/num_guesses": 100.0, "audit/loss/empirical_epsilon_details/0.01/correct_guesses": 100.0, "audit/embedding/auc": 0.876048, "audit/embedding/empirical_epsilon/0.05": 3.4791953936219215, "audit/embedding/empirical_epsilon/0.01": 3.023197554051876, "audit/embedding/empirical_epsilon_details/0.05/epsilon": 3.4791953936219215, "audit/embedding/empirical_epsilon_details/0.05/num_guesses": 100.0, "audit/embedding/empirical_epsilon_details/0.05/correct_guesses": 100.0, "audit/embedding/empirical_epsilon_details/0.01/epsilon": 3.023197554051876, "audit/embedding/empirical_epsilon_details/0.01/num_guesses": 100.0, "audit/embedding/empirical_epsilon_details/0.01/correct_guesses": 100.0, "perf/audit_duration_sec": 8.130579099990427}}
+ {"timestamp": 1773762984.332577, "event": "train_step", "step": 190, "epoch": 2, "metrics": {"train/step_loss": 0.8655764758586884, "train/step_real_loss": 0.8655764758586884, "train/lr": 5.157503657571385e-05, "perf/step_duration_sec": 5.690848938189447, "perf/samples_per_sec": 5.623062630472995, "perf/tokens_per_sec": 4602.301042334944, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 26191.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 87.30217599868774}}
+ {"timestamp": 1773763040.7157884, "event": "train_step", "step": 200, "epoch": 2, "metrics": {"train/step_loss": 0.8589679941986547, "train/step_real_loss": 0.8308791071176529, "train/lr": 4.7076117107656534e-05, "train/step_canary_loss": 1.7578125, "perf/step_duration_sec": 5.69332688068971, "perf/samples_per_sec": 5.796259496697344, "perf/tokens_per_sec": 5059.0806752537455, "perf/logical_batch_size": 33.0, "perf/logical_token_count": 28803.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 87.30217599868774}}
+ {"timestamp": 1773763053.5208263, "event": "eval_step", "step": 200, "epoch": 2, "metrics": {"eval/loss": 0.8110936123591204, "eval/duration_sec": 12.803085402119905}}
+ {"timestamp": 1773763110.1544352, "event": "train_step", "step": 210, "epoch": 2, "metrics": {"train/step_loss": 0.7684839069843292, "train/step_real_loss": 0.7684839069843292, "train/lr": 4.2600874035126046e-05, "perf/step_duration_sec": 5.4159107422456145, "perf/samples_per_sec": 5.908516872405425, "perf/tokens_per_sec": 4630.061534138701, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 25076.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 87.30229806900024}}
+ {"timestamp": 1773763168.1105735, "event": "train_step", "step": 220, "epoch": 2, "metrics": {"train/step_loss": 0.8040641099214554, "train/step_real_loss": 0.8040641099214554, "train/lr": 3.818554602737332e-05, "perf/step_duration_sec": 5.967140641994774, "perf/samples_per_sec": 5.36270249351834, "perf/tokens_per_sec": 4430.932935269528, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 26440.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 87.30229806900024}}
+ {"timestamp": 1773763224.221138, "event": "train_step", "step": 230, "epoch": 2, "metrics": {"train/step_loss": 0.806927278637886, "train/step_real_loss": 0.806927278637886, "train/lr": 3.386588658621128e-05, "perf/step_duration_sec": 5.424597659613937, "perf/samples_per_sec": 5.899055009782496, "perf/tokens_per_sec": 4482.175734620363, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 24314.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 87.30229806900024}}
+ {"timestamp": 1773763282.06351, "event": "train_step", "step": 240, "epoch": 2, "metrics": {"train/step_loss": 0.9124463796615601, "train/step_real_loss": 0.9124463796615601, "train/lr": 2.967687452893051e-05, "perf/step_duration_sec": 5.692487298045307, "perf/samples_per_sec": 5.621444251792743, "perf/tokens_per_sec": 4766.282044988772, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 27132.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.47624206542969}}
+ {"timestamp": 1773763337.7170568, "event": "train_step", "step": 250, "epoch": 2, "metrics": {"train/step_loss": 0.7962393760681152, "train/step_real_loss": 0.7962393760681152, "train/lr": 2.5652430744289756e-05, "perf/step_duration_sec": 5.419141778722405, "perf/samples_per_sec": 5.904994057480479, "perf/tokens_per_sec": 4963.885629569528, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 26900.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.47624206542969}}
+ {"timestamp": 1773763350.5270941, "event": "eval_step", "step": 250, "epoch": 2, "metrics": {"eval/loss": 0.8086002505360506, "eval/duration_sec": 12.808165564201772}}
+ {"timestamp": 1773763406.908792, "event": "train_step", "step": 260, "epoch": 2, "metrics": {"train/step_loss": 0.7549401223659515, "train/step_real_loss": 0.7549401223659515, "train/lr": 2.1825143515174878e-05, "perf/step_duration_sec": 5.96535021904856, "perf/samples_per_sec": 5.364312039520762, "perf/tokens_per_sec": 4828.551374573625, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 28804.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.47624206542969}}
+ {"timestamp": 1773763462.8593152, "event": "train_step", "step": 270, "epoch": 2, "metrics": {"train/step_loss": 0.8835187554359436, "train/step_real_loss": 0.8835187554359436, "train/lr": 1.822600463214922e-05, "perf/step_duration_sec": 6.238990655634552, "perf/samples_per_sec": 5.129034769606553, "perf/tokens_per_sec": 4065.8820312690445, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 25367.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.47624206542969}}
+ {"timestamp": 1773763519.993293, "event": "train_step", "step": 280, "epoch": 2, "metrics": {"train/step_loss": 0.9217604398727417, "train/step_real_loss": 0.9217604398727417, "train/lr": 1.488415843473942e-05, "perf/step_duration_sec": 5.145696292165667, "perf/samples_per_sec": 6.2187890973511335, "perf/tokens_per_sec": 4732.109828765628, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 24350.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.47624206542969}}
+ {"timestamp": 1773763577.139687, "event": "train_step", "step": 290, "epoch": 2, "metrics": {"train/step_loss": 0.9289029836654663, "train/step_real_loss": 0.9481655806303024, "train/lr": 1.1826665812616183e-05, "train/step_canary_loss": 0.3125, "perf/step_duration_sec": 5.694677841849625, "perf/samples_per_sec": 5.794884437094274, "perf/tokens_per_sec": 4061.3359776095867, "perf/logical_batch_size": 33.0, "perf/logical_token_count": 23128.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.47624206542969}}
+ {"timestamp": 1773763634.8883243, "event": "train_step", "step": 300, "epoch": 2, "metrics": {"train/step_loss": 0.9466440713766849, "train/step_real_loss": 0.8473204374313354, "train/lr": 9.078285077691178e-06, "train/step_canary_loss": 4.125, "perf/step_duration_sec": 6.2339927861467, "perf/samples_per_sec": 5.293557617412269, "perf/tokens_per_sec": 4104.752905210987, "perf/logical_batch_size": 33.0, "perf/logical_token_count": 25589.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.47624206542969}}
+ {"timestamp": 1773763647.6901762, "event": "eval_step", "step": 300, "epoch": 2, "metrics": {"eval/loss": 0.8076710475560945, "eval/duration_sec": 12.799929299857467}}
+ {"timestamp": 1773763704.81611, "event": "train_step", "step": 310, "epoch": 2, "metrics": {"train/step_loss": 0.9043312668800354, "train/step_real_loss": 0.9043312668800354, "train/lr": 6.661271481537157e-06, "perf/step_duration_sec": 5.423097257036716, "perf/samples_per_sec": 5.900687095087322, "perf/tokens_per_sec": 4037.360748341778, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 21895.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.47624206542969}}
+ {"timestamp": 1773763760.6698298, "event": "train_step", "step": 320, "epoch": 2, "metrics": {"train/step_loss": 0.8735850304365158, "train/step_real_loss": 0.8735850304365158, "train/lr": 4.595197001556562e-06, "perf/step_duration_sec": 5.15081740077585, "perf/samples_per_sec": 6.212606176872034, "perf/tokens_per_sec": 4842.532370928723, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 24943.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.47624206542969}}
+ {"timestamp": 1773763818.6010072, "event": "train_step", "step": 330, "epoch": 2, "metrics": {"train/step_loss": 0.7620985209941864, "train/step_real_loss": 0.7620985209941864, "train/lr": 2.8967918551955297e-06, "perf/step_duration_sec": 6.058840225916356, "perf/samples_per_sec": 5.281538843543317, "perf/tokens_per_sec": 4943.025213289962, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 29949.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.47624206542969}}
+ {"timestamp": 1773763875.2762098, "event": "train_step", "step": 340, "epoch": 2, "metrics": {"train/step_loss": 0.9000806212425232, "train/step_real_loss": 0.9000806212425232, "train/lr": 1.5798090255558617e-06, "perf/step_duration_sec": 5.154371100012213, "perf/samples_per_sec": 6.2083228737496565, "perf/tokens_per_sec": 4376.285595724094, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 22557.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.47624206542969}}
+ {"timestamp": 1773763932.3082285, "event": "train_step", "step": 350, "epoch": 2, "metrics": {"train/step_loss": 0.7404757142066956, "train/step_real_loss": 0.7404757142066956, "train/lr": 6.54912895420573e-07, "perf/step_duration_sec": 5.694617530796677, "perf/samples_per_sec": 5.619341391575985, "perf/tokens_per_sec": 4990.501968974935, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 28419.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.47624206542969}}
+ {"timestamp": 1773763945.1132307, "event": "eval_step", "step": 350, "epoch": 2, "metrics": {"eval/loss": 0.807514699605795, "eval/duration_sec": 12.803151289001107}}
+ {"timestamp": 1773764000.5064592, "event": "train_step", "step": 360, "epoch": 2, "metrics": {"train/step_loss": 0.8157700151205063, "train/step_real_loss": 0.8157700151205063, "train/lr": 1.295928914885336e-07, "perf/step_duration_sec": 5.695594378747046, "perf/samples_per_sec": 5.618377621729371, "perf/tokens_per_sec": 4667.116060650316, "perf/logical_batch_size": 32.0, "perf/logical_token_count": 26582.0, "perf/gradient_accumulation_steps": 4.0, "system/cuda_memory_allocated_gb": 15.915565013885498, "system/cuda_max_memory_allocated_gb": 94.47624206542969}}
+ {"timestamp": 1773764058.7973218, "event": "train_epoch", "step": 368, "epoch": 2, "metrics": {"train/epoch_loss": 0.856036927981092, "train/epoch_real_loss": 0.8280306565727147, "train/epoch_canary_loss": 3.6806401156922846, "perf/epoch_duration_sec": 1096.7485609338619, "perf/epoch_samples_per_sec": 43.38460217306761, "perf/epoch_tokens_per_sec": 34225.847506959835, "perf/epoch_samples": 47582.0, "perf/epoch_tokens": 37537149.0, "system/cuda_epoch_peak_memory_gb": 94.47624206542969, "eval/loss": 0.8075161480750794, "eval/duration_sec": 12.857198356185108}}
+ {"timestamp": 1773764072.6744637, "event": "audit_epoch", "step": 368, "epoch": 2, "metrics": {"audit/delta": 1e-05, "audit/num_canaries": 500.0, "audit/num_members": 250.0, "audit/paper_guess_fraction": 0.2, "audit/paper_guess_steps": 20.0, "audit/loss/auc": 0.968584, "audit/loss/empirical_epsilon/0.05": 3.4791953936219215, "audit/loss/empirical_epsilon/0.01": 3.023197554051876, "audit/loss/empirical_epsilon_details/0.05/epsilon": 3.4791953936219215, "audit/loss/empirical_epsilon_details/0.05/num_guesses": 100.0, "audit/loss/empirical_epsilon_details/0.05/correct_guesses": 100.0, "audit/loss/empirical_epsilon_details/0.01/epsilon": 3.023197554051876, "audit/loss/empirical_epsilon_details/0.01/num_guesses": 100.0, "audit/loss/empirical_epsilon_details/0.01/correct_guesses": 100.0, "audit/embedding/auc": 0.883776, "audit/embedding/empirical_epsilon/0.05": 3.4791953936219215, "audit/embedding/empirical_epsilon/0.01": 3.023197554051876, "audit/embedding/empirical_epsilon_details/0.05/epsilon": 3.4791953936219215, "audit/embedding/empirical_epsilon_details/0.05/num_guesses": 100.0, "audit/embedding/empirical_epsilon_details/0.05/correct_guesses": 100.0, "audit/embedding/empirical_epsilon_details/0.01/epsilon": 3.023197554051876, "audit/embedding/empirical_epsilon_details/0.01/num_guesses": 100.0, "audit/embedding/empirical_epsilon_details/0.01/correct_guesses": 100.0, "perf/audit_duration_sec": 7.556974642910063}}
+ {"timestamp": 1773764086.367738, "event": "audit_final", "step": 368, "epoch": 2, "metrics": {"audit/delta": 1e-05, "audit/num_canaries": 500.0, "audit/num_members": 250.0, "audit/paper_guess_fraction": 0.2, "audit/paper_guess_steps": 20.0, "audit/loss/auc": 0.968584, "audit/loss/empirical_epsilon/0.05": 3.4791953936219215, "audit/loss/empirical_epsilon/0.01": 3.023197554051876, "audit/loss/empirical_epsilon_details/0.05/epsilon": 3.4791953936219215, "audit/loss/empirical_epsilon_details/0.05/num_guesses": 100.0, "audit/loss/empirical_epsilon_details/0.05/correct_guesses": 100.0, "audit/loss/empirical_epsilon_details/0.01/epsilon": 3.023197554051876, "audit/loss/empirical_epsilon_details/0.01/num_guesses": 100.0, "audit/loss/empirical_epsilon_details/0.01/correct_guesses": 100.0, "audit/embedding/auc": 0.883776, "audit/embedding/empirical_epsilon/0.05": 3.4791953936219215, "audit/embedding/empirical_epsilon/0.01": 3.023197554051876, "audit/embedding/empirical_epsilon_details/0.05/epsilon": 3.4791953936219215, "audit/embedding/empirical_epsilon_details/0.05/num_guesses": 100.0, "audit/embedding/empirical_epsilon_details/0.05/correct_guesses": 100.0, "audit/embedding/empirical_epsilon_details/0.01/epsilon": 3.023197554051876, "audit/embedding/empirical_epsilon_details/0.01/num_guesses": 100.0, "audit/embedding/empirical_epsilon_details/0.01/correct_guesses": 100.0}}
+ {"timestamp": 1773764086.9161372, "event": "energy_final", "step": 368, "epoch": null, "metrics": {"energy/codecarbon/duration": 2345.9966679112986, "energy/codecarbon/emissions": 0.09022432714096462, "energy/codecarbon/emissions_rate": 3.8458847096868924e-05, "energy/codecarbon/cpu_power": 72.02285277932866, "energy/codecarbon/gpu_power": 3280.290622412428, "energy/codecarbon/ram_power": 54.0, "energy/codecarbon/cpu_energy": 0.045218986505725985, "energy/codecarbon/gpu_energy": 2.137964725370466, "energy/codecarbon/ram_energy": 0.03390259211605879, "energy/codecarbon/energy_consumed": 2.2170863039922497, "energy/codecarbon/water_consumed": 0.0, "energy/codecarbon/cpu_count": 256.0, "energy/codecarbon/gpu_count": 8.0, "energy/codecarbon/longitude": 16.1885, "energy/codecarbon/latitude": 58.594, "energy/codecarbon/ram_total_size": 1511.49019241333, "energy/codecarbon/cpu_utilization_percent": 3.3142796066695253, "energy/codecarbon/gpu_utilization_percent": 88.58721675929884, "energy/codecarbon/ram_utilization_percent": 5.287772552372644, "energy/codecarbon/ram_used_gb": 79.7947571596665, "energy/codecarbon/pue": 1.0, "energy/codecarbon/wue": 0.0}}
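The audit entries above report `empirical_epsilon` values derived from `num_guesses`/`correct_guesses` at each p-value. The auditing code itself is not part of this upload, so the following is only a minimal sketch of how a lower bound of this kind can be computed: it assumes the standard membership-guessing accuracy bound acc ≤ e^ε/(1 + e^ε) + δ and a one-sided binomial test, then bisects on ε. The function names are illustrative, and the numbers it produces are close to, but not exactly, the logged values.

```python
import math


def binom_tail(n: int, k: int, p: float) -> float:
    """P[X >= k] for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))


def empirical_epsilon_lower_bound(num_guesses: int, correct_guesses: int,
                                  delta: float, p_value: float) -> float:
    """Largest epsilon still rejected at level p_value, assuming the
    membership-guessing accuracy bound acc(eps) = e^eps/(1+e^eps) + delta.
    Hypothetical reimplementation, not the repo's actual audit code."""
    lo, hi = 0.0, 20.0
    for _ in range(60):  # bisection on epsilon
        mid = (lo + hi) / 2
        acc = min(math.exp(mid) / (1 + math.exp(mid)) + delta, 1.0)
        if binom_tail(num_guesses, correct_guesses, acc) < p_value:
            lo = mid  # this epsilon is still ruled out -> valid lower bound
        else:
            hi = mid
    return lo


# 100/100 correct guesses at delta = 1e-5, as in the audit logs above
print(empirical_epsilon_lower_bound(100, 100, 1e-5, 0.05))  # ~3.49
print(empirical_epsilon_lower_bound(100, 100, 1e-5, 0.01))  # ~3.05
```

With all 100 guesses correct this reduces to solving acc^100 = p_value, which is why the bound at p = 0.01 is tighter (smaller) than at p = 0.05.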
qwen3-4b-instruct/base/pretrain_lm_head.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc44b7d60b8e2cf912e4233ff02bc57bb7e91f7a3ba6aa8ea10b7767ca29954a
+ size 779106920
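`pretrain_lm_head.pt` is stored as a Git LFS pointer file rather than raw weights: three `key value` lines giving the spec `version`, the `oid` (hash algorithm and digest), and the `size` in bytes. As a small illustration (this parser is not part of the repo), the pointer above can be read like so:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "hash_algo": algo,
        "oid": digest,
        "size_bytes": int(fields["size"]),
    }


# The pointer stored for pretrain_lm_head.pt in this commit
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:bc44b7d60b8e2cf912e4233ff02bc57bb7e91f7a3ba6aa8ea10b7767ca29954a
size 779106920
"""
info = parse_lfs_pointer(pointer)
print(info["hash_algo"], info["size_bytes"])  # sha256 779106920
```

So the checkpoint itself is ~779 MB and is fetched via LFS on clone, not stored in the git history.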
qwen3-4b-instruct/base/resolved_config.yaml ADDED
@@ -0,0 +1,100 @@
+ model:
+   name: Qwen/Qwen3-4B-Instruct-2507
+   tokenizer_name: Qwen/Qwen3-4B-Instruct-2507
+   max_length: 1024
+   dtype: bfloat16
+   trust_remote_code: true
+   use_fast_tokenizer: true
+   cache_dir: null
+   local_files_only: false
+   low_cpu_mem_usage: true
+   tie_word_embeddings: true
+   gradient_checkpointing: false
+   use_chat_template: false
+ dataset:
+   name: melihcatal/codedp-cpt
+   split: train
+   mode: cpt
+   text_column: text
+   validation_ratio: 0.05
+   max_samples: -1
+ lora:
+   enabled: true
+   r: 16
+   alpha: 32
+   dropout: 0.05
+   target_modules:
+   - q_proj
+   - k_proj
+   - v_proj
+   - o_proj
+   modules_to_save:
+   - lm_head
+   bias: none
+ training:
+   seed: 42
+   epochs: 2
+   warmup_steps: null
+   warmup_ratio: 0.05
+   mixed_precision: false
+   mixed_precision_dtype: bfloat16
+   batch_size: 8
+   eval_batch_size: 8
+   eval_every_steps: 50
+   eval_every_epochs: 1
+   learning_rate: 0.0001
+   optimizer: adamw
+   lr_scheduler: cosine
+   adam_beta1: 0.9
+   adam_beta2: 0.999
+   adam_epsilon: 1.0e-08
+   sgd_momentum: 0.9
+   weight_decay: 0.01
+   max_grad_norm: 1.0
+   log_every: 10
+   gradient_accumulation_steps: 4
+   num_workers: 4
+   output_dir: runs/cpt/qwen3-4b-instruct/base
+ distributed:
+   strategy: dpddp
+   backend: nccl
+   devices: null
+ dp:
+   module_validator: auto
+   target_delta: 1.0e-05
+   noise_multiplier: null
+   max_grad_norm: 1.0
+   grad_sample_mode: ghost
+   secure_mode: false
+   enabled: false
+   target_epsilon: 8.0
+ audit:
+   enabled: true
+   run_every_epoch: true
+   epoch_device: cuda
+   q_canary: auto
+   num_canaries: 500
+   prefix_length: 49
+   num_digits: 12
+   batch_size: 32
+   delta: 1.0e-05
+   p_values:
+   - 0.05
+   - 0.01
+   paper_guess_fraction: 0.2
+   paper_guess_steps: 20
+   enable_holdout_empirical_epsilon: false
+   holdout_seed: 42
+   tie_seed: 42
+ tracking:
+   enabled: true
+   tensorboard: true
+   wandb: false
+   wandb_project: codedp-finetune-h200-audit
+   wandb_run_name: qwen3-4b-instruct-cpt-base
+   wandb_mode: online
+   codecarbon: true
+   codecarbon_output_file: codecarbon.csv
+   codecarbon_measure_power_secs: 15
+   codecarbon_country_iso_code: null
+   codecarbon_project_name: codedp-qwen3-4b-instruct-cpt-base
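One detail worth cross-checking in this config: the per-rank logical batch is `batch_size × gradient_accumulation_steps = 8 × 4 = 32`, which matches the `perf/logical_batch_size` values in `scalars.csv` (32, occasionally 33–34 when canary samples are mixed in). The sketch below also derives an effective global batch, assuming a world size of 8 taken from the `energy/codecarbon/gpu_count` field in the logs; the world size is an inference, not stated in the config itself.

```python
# Values from resolved_config.yaml; world_size inferred from
# energy/codecarbon/gpu_count = 8 in metrics.jsonl (an assumption).
batch_size = 8
gradient_accumulation_steps = 4
world_size = 8

per_rank_logical_batch = batch_size * gradient_accumulation_steps
global_batch = per_rank_logical_batch * world_size
print(per_rank_logical_batch, global_batch)  # 32 256
```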
qwen3-4b-instruct/base/scalars.csv ADDED
@@ -0,0 +1,537 @@
+ timestamp,event,step,epoch,key,value
+ 1773761894.8111923,train_step,10,1,train/step_loss,1.8352766107110416
+ 1773761894.8111923,train_step,10,1,train/step_real_loss,1.028106451034546
+ 1773761894.8111923,train_step,10,1,train/lr,5.2631578947368424e-05
+ 1773761894.8111923,train_step,10,1,train/step_canary_loss,14.75
+ 1773761894.8111923,train_step,10,1,perf/step_duration_sec,6.234770041890442
+ 1773761894.8111923,train_step,10,1,perf/samples_per_sec,5.453288536956348
+ 1773761894.8111923,train_step,10,1,perf/tokens_per_sec,3980.098677781523
+ 1773761894.8111923,train_step,10,1,perf/logical_batch_size,34.0
+ 1773761894.8111923,train_step,10,1,perf/logical_token_count,24815.0
+ 1773761894.8111923,train_step,10,1,perf/gradient_accumulation_steps,4.0
+ 1773761894.8111923,train_step,10,1,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773761894.8111923,train_step,10,1,system/cuda_max_memory_allocated_gb,94.4762544631958
+ 1773761950.7987902,train_step,20,1,train/step_loss,1.0323970019817352
+ 1773761950.7987902,train_step,20,1,train/step_real_loss,1.0323970019817352
+ 1773761950.7987902,train_step,20,1,train/lr,9.999797424944042e-05
+ 1773761950.7987902,train_step,20,1,perf/step_duration_sec,5.150427320972085
+ 1773761950.7987902,train_step,20,1,perf/samples_per_sec,6.213076703305535
+ 1773761950.7987902,train_step,20,1,perf/tokens_per_sec,4927.9406189561805
+ 1773761950.7987902,train_step,20,1,perf/logical_batch_size,32.0
+ 1773761950.7987902,train_step,20,1,perf/logical_token_count,25381.0
+ 1773761950.7987902,train_step,20,1,perf/gradient_accumulation_steps,4.0
+ 1773761950.7987902,train_step,20,1,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773761950.7987902,train_step,20,1,system/cuda_max_memory_allocated_gb,94.4762544631958
+ 1773762007.3004794,train_step,30,1,train/step_loss,0.8551503717899323
+ 1773762007.3004794,train_step,30,1,train/step_real_loss,0.8551503717899323
+ 1773762007.3004794,train_step,30,1,train/lr,9.975508273693644e-05
+ 1773762007.3004794,train_step,30,1,perf/step_duration_sec,5.69609066285193
+ 1773762007.3004794,train_step,30,1,perf/samples_per_sec,5.617888108539722
+ 1773762007.3004794,train_step,30,1,perf/tokens_per_sec,4432.689276641233
+ 1773762007.3004794,train_step,30,1,perf/logical_batch_size,32.0
+ 1773762007.3004794,train_step,30,1,perf/logical_token_count,25249.0
+ 1773762007.3004794,train_step,30,1,perf/gradient_accumulation_steps,4.0
+ 1773762007.3004794,train_step,30,1,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773762007.3004794,train_step,30,1,system/cuda_max_memory_allocated_gb,94.4762544631958
+ 1773762065.1568909,train_step,40,1,train/step_loss,0.8950656801462173
+ 1773762065.1568909,train_step,40,1,train/step_real_loss,0.8950656801462173
+ 1773762065.1568909,train_step,40,1,train/lr,9.910929512300672e-05
+ 1773762065.1568909,train_step,40,1,perf/step_duration_sec,6.2338299779221416
+ 1773762065.1568909,train_step,40,1,perf/samples_per_sec,5.133280842328368
+ 1773762065.1568909,train_step,40,1,perf/tokens_per_sec,4016.4714290693023
+ 1773762065.1568909,train_step,40,1,perf/logical_batch_size,32.0
+ 1773762065.1568909,train_step,40,1,perf/logical_token_count,25038.0
+ 1773762065.1568909,train_step,40,1,perf/gradient_accumulation_steps,4.0
+ 1773762065.1568909,train_step,40,1,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773762065.1568909,train_step,40,1,system/cuda_max_memory_allocated_gb,94.4762544631958
+ 1773762121.2732794,train_step,50,1,train/step_loss,0.8450518101453781
+ 1773762121.2732794,train_step,50,1,train/step_real_loss,0.8450518101453781
+ 1773762121.2732794,train_step,50,1,train/lr,9.806584072891234e-05
+ 1773762121.2732794,train_step,50,1,perf/step_duration_sec,5.423216213937849
+ 1773762121.2732794,train_step,50,1,perf/samples_per_sec,5.900557664980961
+ 1773762121.2732794,train_step,50,1,perf/tokens_per_sec,5482.724425329497
+ 1773762121.2732794,train_step,50,1,perf/logical_batch_size,32.0
+ 1773762121.2732794,train_step,50,1,perf/logical_token_count,29734.0
+ 1773762121.2732794,train_step,50,1,perf/gradient_accumulation_steps,4.0
+ 1773762121.2732794,train_step,50,1,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773762121.2732794,train_step,50,1,system/cuda_max_memory_allocated_gb,94.4762544631958
+ 1773762134.073646,eval_step,50,1,eval/loss,0.8533683589253671
+ 1773762134.073646,eval_step,50,1,eval/duration_sec,12.798461285419762
+ 1773762190.2760499,train_step,60,1,train/step_loss,1.114606170943289
+ 1773762190.2760499,train_step,60,1,train/step_real_loss,0.8427969664335251
+ 1773762190.2760499,train_step,60,1,train/lr,9.663316901718597e-05
+ 1773762190.2760499,train_step,60,1,train/step_canary_loss,9.8125
+ 1773762190.2760499,train_step,60,1,perf/step_duration_sec,5.968041606713086
+ 1773762190.2760499,train_step,60,1,perf/samples_per_sec,5.5294520673046765
+ 1773762190.2760499,train_step,60,1,perf/tokens_per_sec,4259.353683360148
+ 1773762190.2760499,train_step,60,1,perf/logical_batch_size,33.0
+ 1773762190.2760499,train_step,60,1,perf/logical_token_count,25420.0
+ 1773762190.2760499,train_step,60,1,perf/gradient_accumulation_steps,4.0
+ 1773762190.2760499,train_step,60,1,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773762190.2760499,train_step,60,1,system/cuda_max_memory_allocated_gb,94.4762544631958
+ 1773762249.207255,train_step,70,1,train/step_loss,1.1624658794114084
+ 1773762249.207255,train_step,70,1,train/step_real_loss,0.8745741844177246
+ 1773762249.207255,train_step,70,1,train/lr,9.48228811713756e-05
+ 1773762249.207255,train_step,70,1,train/step_canary_loss,10.375
+ 1773762249.207255,train_step,70,1,perf/step_duration_sec,6.334186799824238
+ 1773762249.207255,train_step,70,1,perf/samples_per_sec,5.209824250986045
+ 1773762249.207255,train_step,70,1,perf/tokens_per_sec,3829.378697936894
+ 1773762249.207255,train_step,70,1,perf/logical_batch_size,33.0
+ 1773762249.207255,train_step,70,1,perf/logical_token_count,24256.0
+ 1773762249.207255,train_step,70,1,perf/gradient_accumulation_steps,4.0
+ 1773762249.207255,train_step,70,1,system/cuda_memory_allocated_gb,16.205660820007324
+ 1773762249.207255,train_step,70,1,system/cuda_max_memory_allocated_gb,94.4762544631958
+ 1773762305.719841,train_step,80,1,train/step_loss,1.157600255573497
+ 1773762305.719841,train_step,80,1,train/step_real_loss,0.8803409039974213
+ 1773762305.719841,train_step,80,1,train/lr,9.26496361544538e-05
+ 1773762305.719841,train_step,80,1,train/step_canary_loss,5.59375
+ 1773762305.719841,train_step,80,1,perf/step_duration_sec,5.699764240998775
+ 1773762305.719841,train_step,80,1,perf/samples_per_sec,5.9651590070052
+ 1773762305.719841,train_step,80,1,perf/tokens_per_sec,4690.720329743854
+ 1773762305.719841,train_step,80,1,perf/logical_batch_size,34.0
+ 1773762305.719841,train_step,80,1,perf/logical_token_count,26736.0
+ 1773762305.719841,train_step,80,1,perf/gradient_accumulation_steps,4.0
+ 1773762305.719841,train_step,80,1,system/cuda_memory_allocated_gb,16.205660820007324
+ 1773762305.719841,train_step,80,1,system/cuda_max_memory_allocated_gb,94.4762544631958
+ 1773762363.2025864,train_step,90,1,train/step_loss,0.8746908158063889
+ 1773762363.2025864,train_step,90,1,train/step_real_loss,0.8746908158063889
+ 1773762363.2025864,train_step,90,1,train/lr,9.013103200659241e-05
+ 1773762363.2025864,train_step,90,1,perf/step_duration_sec,5.422750173136592
+ 1773762363.2025864,train_step,90,1,perf/samples_per_sec,5.901064769408466
+ 1773762363.2025864,train_step,90,1,perf/tokens_per_sec,4211.147346069116
+ 1773762363.2025864,train_step,90,1,perf/logical_batch_size,32.0
+ 1773762363.2025864,train_step,90,1,perf/logical_token_count,22836.0
+ 1773762363.2025864,train_step,90,1,perf/gradient_accumulation_steps,4.0
+ 1773762363.2025864,train_step,90,1,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773762363.2025864,train_step,90,1,system/cuda_max_memory_allocated_gb,94.4762544631958
+ 1773762419.340239,train_step,100,1,train/step_loss,1.227062124194521
+ 1773762419.340239,train_step,100,1,train/step_real_loss,0.9060328304767609
+ 1773762419.340239,train_step,100,1,train/lr,8.728746334350483e-05
+ 1773762419.340239,train_step,100,1,train/step_canary_loss,11.5
+ 1773762419.340239,train_step,100,1,perf/step_duration_sec,5.688522285781801
+ 1773762419.340239,train_step,100,1,perf/samples_per_sec,5.801155087760134
+ 1773762419.340239,train_step,100,1,perf/tokens_per_sec,4329.068036096396
+ 1773762419.340239,train_step,100,1,perf/logical_batch_size,33.0
+ 1773762419.340239,train_step,100,1,perf/logical_token_count,24626.0
+ 1773762419.340239,train_step,100,1,perf/gradient_accumulation_steps,4.0
+ 1773762419.340239,train_step,100,1,system/cuda_memory_allocated_gb,16.205660820007324
+ 1773762419.340239,train_step,100,1,system/cuda_max_memory_allocated_gb,94.4762544631958
+ 1773762432.1276581,eval_step,100,1,eval/loss,0.829470864473245
+ 1773762432.1276581,eval_step,100,1,eval/duration_sec,12.785310188308358
+ 1773762489.3303852,train_step,110,1,train/step_loss,1.011215921604272
+ 1773762489.3303852,train_step,110,1,train/step_real_loss,0.9119570553302765
+ 1773762489.3303852,train_step,110,1,train/lr,8.414195620927492e-05
+ 1773762489.3303852,train_step,110,1,train/step_canary_loss,4.1875
+ 1773762489.3303852,train_step,110,1,perf/step_duration_sec,5.6974792359396815
+ 1773762489.3303852,train_step,110,1,perf/samples_per_sec,5.792035149831895
+ 1773762489.3303852,train_step,110,1,perf/tokens_per_sec,4259.603062159705
+ 1773762489.3303852,train_step,110,1,perf/logical_batch_size,33.0
+ 1773762489.3303852,train_step,110,1,perf/logical_token_count,24269.0
+ 1773762489.3303852,train_step,110,1,perf/gradient_accumulation_steps,4.0
+ 1773762489.3303852,train_step,110,1,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773762489.3303852,train_step,110,1,system/cuda_max_memory_allocated_gb,94.4762544631958
+ 1773762544.9202547,train_step,120,1,train/step_loss,0.7180200964212418
+ 1773762544.9202547,train_step,120,1,train/step_real_loss,0.7180200964212418
+ 1773762544.9202547,train_step,120,1,train/lr,8.071998162096612e-05
+ 1773762544.9202547,train_step,120,1,perf/step_duration_sec,5.694211829919368
+ 1773762544.9202547,train_step,120,1,perf/samples_per_sec,5.619741758088605
+ 1773762544.9202547,train_step,120,1,perf/tokens_per_sec,4598.00245969612
+ 1773762544.9202547,train_step,120,1,perf/logical_batch_size,32.0
+ 1773762544.9202547,train_step,120,1,perf/logical_token_count,26182.0
+ 1773762544.9202547,train_step,120,1,perf/gradient_accumulation_steps,4.0
+ 1773762544.9202547,train_step,120,1,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773762544.9202547,train_step,120,1,system/cuda_max_memory_allocated_gb,94.4762544631958
+ 1773762601.86658,train_step,130,1,train/step_loss,0.8463245183229446
+ 1773762601.86658,train_step,130,1,train/step_real_loss,0.8463245183229446
+ 1773762601.86658,train_step,130,1,train/lr,7.704924931484997e-05
+ 1773762601.86658,train_step,130,1,perf/step_duration_sec,5.1536158989183605
+ 1773762601.86658,train_step,130,1,perf/samples_per_sec,6.209232629602092
+ 1773762601.86658,train_step,130,1,perf/tokens_per_sec,4267.683201733388
+ 1773762601.86658,train_step,130,1,perf/logical_batch_size,32.0
+ 1773762601.86658,train_step,130,1,perf/logical_token_count,21994.0
+ 1773762601.86658,train_step,130,1,perf/gradient_accumulation_steps,4.0
+ 1773762601.86658,train_step,130,1,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773762601.86658,train_step,130,1,system/cuda_max_memory_allocated_gb,101.70386934280396
+ 1773762659.5638738,train_step,140,1,train/step_loss,0.9742496162652969
+ 1773762659.5638738,train_step,140,1,train/step_real_loss,0.9742496162652969
+ 1773762659.5638738,train_step,140,1,train/lr,7.315948336441117e-05
+ 1773762659.5638738,train_step,140,1,perf/step_duration_sec,5.969049285165966
+ 1773762659.5638738,train_step,140,1,perf/samples_per_sec,5.360987733762741
+ 1773762659.5638738,train_step,140,1,perf/tokens_per_sec,3927.091045847888
+ 1773762659.5638738,train_step,140,1,perf/logical_batch_size,32.0
+ 1773762659.5638738,train_step,140,1,perf/logical_token_count,23441.0
+ 1773762659.5638738,train_step,140,1,perf/gradient_accumulation_steps,4.0
+ 1773762659.5638738,train_step,140,1,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773762659.5638738,train_step,140,1,system/cuda_max_memory_allocated_gb,101.70386934280396
+ 1773762717.6212678,train_step,150,1,train/step_loss,0.914526179432869
+ 1773762717.6212678,train_step,150,1,train/step_real_loss,0.914526179432869
+ 1773762717.6212678,train_step,150,1,train/lr,6.908218148708247e-05
+ 1773762717.6212678,train_step,150,1,perf/step_duration_sec,5.703813333064318
+ 1773762717.6212678,train_step,150,1,perf/samples_per_sec,5.610281776666824
+ 1773762717.6212678,train_step,150,1,perf/tokens_per_sec,4645.31331108013
+ 1773762717.6212678,train_step,150,1,perf/logical_batch_size,32.0
+ 1773762717.6212678,train_step,150,1,perf/logical_token_count,26496.0
+ 1773762717.6212678,train_step,150,1,perf/gradient_accumulation_steps,4.0
+ 1773762717.6212678,train_step,150,1,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773762717.6212678,train_step,150,1,system/cuda_max_memory_allocated_gb,101.70386934280396
+ 1773762730.4167268,eval_step,150,1,eval/loss,0.8174186625923866
+ 1773762730.4167268,eval_step,150,1,eval/duration_sec,12.793565314263105
+ 1773762785.9700294,train_step,160,1,train/step_loss,0.9113822728395462
+ 1773762785.9700294,train_step,160,1,train/step_real_loss,0.9113822728395462
+ 1773762785.9700294,train_step,160,1,train/lr,6.485035998874356e-05
+ 1773762785.9700294,train_step,160,1,perf/step_duration_sec,5.425368802621961
+ 1773762785.9700294,train_step,160,1,perf/samples_per_sec,5.898216538668322
+ 1773762785.9700294,train_step,160,1,perf/tokens_per_sec,4679.128907832313
+ 1773762785.9700294,train_step,160,1,perf/logical_batch_size,32.0
+ 1773762785.9700294,train_step,160,1,perf/logical_token_count,25386.0
+ 1773762785.9700294,train_step,160,1,perf/gradient_accumulation_steps,4.0
+ 1773762785.9700294,train_step,160,1,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773762785.9700294,train_step,160,1,system/cuda_max_memory_allocated_gb,101.70386934280396
+ 1773762841.0786932,train_step,170,1,train/step_loss,0.8910564256436897
+ 1773762841.0786932,train_step,170,1,train/step_real_loss,0.8441949039697647
+ 1773762841.0786932,train_step,170,1,train/lr,6.049828641131825e-05
+ 1773762841.0786932,train_step,170,1,train/step_canary_loss,2.390625
+ 1773762841.0786932,train_step,170,1,perf/step_duration_sec,5.690398690290749
+ 1773762841.0786932,train_step,170,1,perf/samples_per_sec,5.799242161415912
+ 1773762841.0786932,train_step,170,1,perf/tokens_per_sec,4355.582332445254
+ 1773762841.0786932,train_step,170,1,perf/logical_batch_size,33.0
+ 1773762841.0786932,train_step,170,1,perf/logical_token_count,24785.0
+ 1773762841.0786932,train_step,170,1,perf/gradient_accumulation_steps,4.0
+ 1773762841.0786932,train_step,170,1,system/cuda_memory_allocated_gb,16.205660820007324
+ 1773762841.0786932,train_step,170,1,system/cuda_max_memory_allocated_gb,101.70386934280396
+ 1773762898.8441253,train_step,180,1,train/step_loss,1.0836328773787527
+ 1773762898.8441253,train_step,180,1,train/step_real_loss,0.8538245260715485
+ 1773762898.8441253,train_step,180,1,train/lr,5.6061202048379124e-05
+ 1773762898.8441253,train_step,180,1,train/step_canary_loss,8.4375
+ 1773762898.8441253,train_step,180,1,perf/step_duration_sec,5.692985306028277
+ 1773762898.8441253,train_step,180,1,perf/samples_per_sec,5.7966072677293665
+ 1773762898.8441253,train_step,180,1,perf/tokens_per_sec,4476.7373583439585
+ 1773762898.8441253,train_step,180,1,perf/logical_batch_size,33.0
+ 1773762898.8441253,train_step,180,1,perf/logical_token_count,25486.0
+ 1773762898.8441253,train_step,180,1,perf/gradient_accumulation_steps,4.0
+ 1773762898.8441253,train_step,180,1,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773762898.8441253,train_step,180,1,system/cuda_max_memory_allocated_gb,101.70386934280396
+ 1773762934.6675656,train_epoch,184,1,train/epoch_loss,0.9805822407983543
+ 1773762934.6675656,train_epoch,184,1,train/epoch_real_loss,0.9071368900552877
+ 1773762934.6675656,train_epoch,184,1,train/epoch_canary_loss,8.42140839386602
+ 1773762934.6675656,train_epoch,184,1,perf/epoch_duration_sec,1085.6805565529503
+ 1773762934.6675656,train_epoch,184,1,perf/epoch_samples_per_sec,43.826887856473604
+ 1773762934.6675656,train_epoch,184,1,perf/epoch_tokens_per_sec,34574.90951038253
+ 1773762934.6675656,train_epoch,184,1,perf/epoch_samples,47582.0
+ 1773762934.6675656,train_epoch,184,1,perf/epoch_tokens,37537307.0
+ 1773762934.6675656,train_epoch,184,1,system/cuda_epoch_peak_memory_gb,101.70386934280396
+ 1773762934.6675656,train_epoch,184,1,eval/loss,0.8121764394335258
+ 1773762934.6675656,train_epoch,184,1,eval/duration_sec,12.826699289958924
+ 1773762949.1512606,audit_epoch,184,1,audit/delta,1e-05
+ 1773762949.1512606,audit_epoch,184,1,audit/num_canaries,500.0
+ 1773762949.1512606,audit_epoch,184,1,audit/num_members,250.0
+ 1773762949.1512606,audit_epoch,184,1,audit/paper_guess_fraction,0.2
+ 1773762949.1512606,audit_epoch,184,1,audit/paper_guess_steps,20.0
+ 1773762949.1512606,audit_epoch,184,1,audit/loss/auc,0.907944
+ 1773762949.1512606,audit_epoch,184,1,audit/loss/empirical_epsilon/0.05,3.4791953936219215
+ 1773762949.1512606,audit_epoch,184,1,audit/loss/empirical_epsilon/0.01,3.023197554051876
+ 1773762949.1512606,audit_epoch,184,1,audit/loss/empirical_epsilon_details/0.05/epsilon,3.4791953936219215
+ 1773762949.1512606,audit_epoch,184,1,audit/loss/empirical_epsilon_details/0.05/num_guesses,100.0
+ 1773762949.1512606,audit_epoch,184,1,audit/loss/empirical_epsilon_details/0.05/correct_guesses,100.0
+ 1773762949.1512606,audit_epoch,184,1,audit/loss/empirical_epsilon_details/0.01/epsilon,3.023197554051876
+ 1773762949.1512606,audit_epoch,184,1,audit/loss/empirical_epsilon_details/0.01/num_guesses,100.0
+ 1773762949.1512606,audit_epoch,184,1,audit/loss/empirical_epsilon_details/0.01/correct_guesses,100.0
+ 1773762949.1512606,audit_epoch,184,1,audit/embedding/auc,0.876048
+ 1773762949.1512606,audit_epoch,184,1,audit/embedding/empirical_epsilon/0.05,3.4791953936219215
+ 1773762949.1512606,audit_epoch,184,1,audit/embedding/empirical_epsilon/0.01,3.023197554051876
+ 1773762949.1512606,audit_epoch,184,1,audit/embedding/empirical_epsilon_details/0.05/epsilon,3.4791953936219215
+ 1773762949.1512606,audit_epoch,184,1,audit/embedding/empirical_epsilon_details/0.05/num_guesses,100.0
+ 1773762949.1512606,audit_epoch,184,1,audit/embedding/empirical_epsilon_details/0.05/correct_guesses,100.0
+ 1773762949.1512606,audit_epoch,184,1,audit/embedding/empirical_epsilon_details/0.01/epsilon,3.023197554051876
+ 1773762949.1512606,audit_epoch,184,1,audit/embedding/empirical_epsilon_details/0.01/num_guesses,100.0
+ 1773762949.1512606,audit_epoch,184,1,audit/embedding/empirical_epsilon_details/0.01/correct_guesses,100.0
+ 1773762949.1512606,audit_epoch,184,1,perf/audit_duration_sec,8.130579099990427
+ 1773762984.332577,train_step,190,2,train/step_loss,0.8655764758586884
+ 1773762984.332577,train_step,190,2,train/step_real_loss,0.8655764758586884
+ 1773762984.332577,train_step,190,2,train/lr,5.157503657571385e-05
+ 1773762984.332577,train_step,190,2,perf/step_duration_sec,5.690848938189447
+ 1773762984.332577,train_step,190,2,perf/samples_per_sec,5.623062630472995
+ 1773762984.332577,train_step,190,2,perf/tokens_per_sec,4602.301042334944
+ 1773762984.332577,train_step,190,2,perf/logical_batch_size,32.0
+ 1773762984.332577,train_step,190,2,perf/logical_token_count,26191.0
+ 1773762984.332577,train_step,190,2,perf/gradient_accumulation_steps,4.0
+ 1773762984.332577,train_step,190,2,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773762984.332577,train_step,190,2,system/cuda_max_memory_allocated_gb,87.30217599868774
+ 1773763040.7157884,train_step,200,2,train/step_loss,0.8589679941986547
+ 1773763040.7157884,train_step,200,2,train/step_real_loss,0.8308791071176529
+ 1773763040.7157884,train_step,200,2,train/lr,4.7076117107656534e-05
+ 1773763040.7157884,train_step,200,2,train/step_canary_loss,1.7578125
+ 1773763040.7157884,train_step,200,2,perf/step_duration_sec,5.69332688068971
+ 1773763040.7157884,train_step,200,2,perf/samples_per_sec,5.796259496697344
+ 1773763040.7157884,train_step,200,2,perf/tokens_per_sec,5059.0806752537455
+ 1773763040.7157884,train_step,200,2,perf/logical_batch_size,33.0
+ 1773763040.7157884,train_step,200,2,perf/logical_token_count,28803.0
+ 1773763040.7157884,train_step,200,2,perf/gradient_accumulation_steps,4.0
+ 1773763040.7157884,train_step,200,2,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773763040.7157884,train_step,200,2,system/cuda_max_memory_allocated_gb,87.30217599868774
+ 1773763053.5208263,eval_step,200,2,eval/loss,0.8110936123591204
+ 1773763053.5208263,eval_step,200,2,eval/duration_sec,12.803085402119905
+ 1773763110.1544352,train_step,210,2,train/step_loss,0.7684839069843292
+ 1773763110.1544352,train_step,210,2,train/step_real_loss,0.7684839069843292
+ 1773763110.1544352,train_step,210,2,train/lr,4.2600874035126046e-05
+ 1773763110.1544352,train_step,210,2,perf/step_duration_sec,5.4159107422456145
+ 1773763110.1544352,train_step,210,2,perf/samples_per_sec,5.908516872405425
+ 1773763110.1544352,train_step,210,2,perf/tokens_per_sec,4630.061534138701
+ 1773763110.1544352,train_step,210,2,perf/logical_batch_size,32.0
+ 1773763110.1544352,train_step,210,2,perf/logical_token_count,25076.0
+ 1773763110.1544352,train_step,210,2,perf/gradient_accumulation_steps,4.0
+ 1773763110.1544352,train_step,210,2,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773763110.1544352,train_step,210,2,system/cuda_max_memory_allocated_gb,87.30229806900024
+ 1773763168.1105735,train_step,220,2,train/step_loss,0.8040641099214554
+ 1773763168.1105735,train_step,220,2,train/step_real_loss,0.8040641099214554
+ 1773763168.1105735,train_step,220,2,train/lr,3.818554602737332e-05
+ 1773763168.1105735,train_step,220,2,perf/step_duration_sec,5.967140641994774
+ 1773763168.1105735,train_step,220,2,perf/samples_per_sec,5.36270249351834
+ 1773763168.1105735,train_step,220,2,perf/tokens_per_sec,4430.932935269528
+ 1773763168.1105735,train_step,220,2,perf/logical_batch_size,32.0
+ 1773763168.1105735,train_step,220,2,perf/logical_token_count,26440.0
+ 1773763168.1105735,train_step,220,2,perf/gradient_accumulation_steps,4.0
+ 1773763168.1105735,train_step,220,2,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773763168.1105735,train_step,220,2,system/cuda_max_memory_allocated_gb,87.30229806900024
+ 1773763224.221138,train_step,230,2,train/step_loss,0.806927278637886
+ 1773763224.221138,train_step,230,2,train/step_real_loss,0.806927278637886
+ 1773763224.221138,train_step,230,2,train/lr,3.386588658621128e-05
+ 1773763224.221138,train_step,230,2,perf/step_duration_sec,5.424597659613937
+ 1773763224.221138,train_step,230,2,perf/samples_per_sec,5.899055009782496
+ 1773763224.221138,train_step,230,2,perf/tokens_per_sec,4482.175734620363
+ 1773763224.221138,train_step,230,2,perf/logical_batch_size,32.0
+ 1773763224.221138,train_step,230,2,perf/logical_token_count,24314.0
+ 1773763224.221138,train_step,230,2,perf/gradient_accumulation_steps,4.0
+ 1773763224.221138,train_step,230,2,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773763224.221138,train_step,230,2,system/cuda_max_memory_allocated_gb,87.30229806900024
+ 1773763282.06351,train_step,240,2,train/step_loss,0.9124463796615601
+ 1773763282.06351,train_step,240,2,train/step_real_loss,0.9124463796615601
+ 1773763282.06351,train_step,240,2,train/lr,2.967687452893051e-05
+ 1773763282.06351,train_step,240,2,perf/step_duration_sec,5.692487298045307
+ 1773763282.06351,train_step,240,2,perf/samples_per_sec,5.621444251792743
+ 1773763282.06351,train_step,240,2,perf/tokens_per_sec,4766.282044988772
+ 1773763282.06351,train_step,240,2,perf/logical_batch_size,32.0
+ 1773763282.06351,train_step,240,2,perf/logical_token_count,27132.0
+ 1773763282.06351,train_step,240,2,perf/gradient_accumulation_steps,4.0
+ 1773763282.06351,train_step,240,2,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773763282.06351,train_step,240,2,system/cuda_max_memory_allocated_gb,94.47624206542969
+ 1773763337.7170568,train_step,250,2,train/step_loss,0.7962393760681152
+ 1773763337.7170568,train_step,250,2,train/step_real_loss,0.7962393760681152
+ 1773763337.7170568,train_step,250,2,train/lr,2.5652430744289756e-05
+ 1773763337.7170568,train_step,250,2,perf/step_duration_sec,5.419141778722405
+ 1773763337.7170568,train_step,250,2,perf/samples_per_sec,5.904994057480479
+ 1773763337.7170568,train_step,250,2,perf/tokens_per_sec,4963.885629569528
+ 1773763337.7170568,train_step,250,2,perf/logical_batch_size,32.0
+ 1773763337.7170568,train_step,250,2,perf/logical_token_count,26900.0
+ 1773763337.7170568,train_step,250,2,perf/gradient_accumulation_steps,4.0
+ 1773763337.7170568,train_step,250,2,system/cuda_memory_allocated_gb,15.915565013885498
+ 1773763337.7170568,train_step,250,2,system/cuda_max_memory_allocated_gb,94.47624206542969
+ 1773763350.5270941,eval_step,250,2,eval/loss,0.8086002505360506
+ 1773763350.5270941,eval_step,250,2,eval/duration_sec,12.808165564201772
+ 1773763406.908792,train_step,260,2,train/step_loss,0.7549401223659515
+ 1773763406.908792,train_step,260,2,train/step_real_loss,0.7549401223659515
+ 1773763406.908792,train_step,260,2,train/lr,2.1825143515174878e-05
+ 1773763406.908792,train_step,260,2,perf/step_duration_sec,5.96535021904856
335
+ 1773763406.908792,train_step,260,2,perf/samples_per_sec,5.364312039520762
336
+ 1773763406.908792,train_step,260,2,perf/tokens_per_sec,4828.551374573625
337
+ 1773763406.908792,train_step,260,2,perf/logical_batch_size,32.0
338
+ 1773763406.908792,train_step,260,2,perf/logical_token_count,28804.0
339
+ 1773763406.908792,train_step,260,2,perf/gradient_accumulation_steps,4.0
340
+ 1773763406.908792,train_step,260,2,system/cuda_memory_allocated_gb,15.915565013885498
341
+ 1773763406.908792,train_step,260,2,system/cuda_max_memory_allocated_gb,94.47624206542969
342
+ 1773763462.8593152,train_step,270,2,train/step_loss,0.8835187554359436
343
+ 1773763462.8593152,train_step,270,2,train/step_real_loss,0.8835187554359436
344
+ 1773763462.8593152,train_step,270,2,train/lr,1.822600463214922e-05
345
+ 1773763462.8593152,train_step,270,2,perf/step_duration_sec,6.238990655634552
346
+ 1773763462.8593152,train_step,270,2,perf/samples_per_sec,5.129034769606553
347
+ 1773763462.8593152,train_step,270,2,perf/tokens_per_sec,4065.8820312690445
348
+ 1773763462.8593152,train_step,270,2,perf/logical_batch_size,32.0
349
+ 1773763462.8593152,train_step,270,2,perf/logical_token_count,25367.0
350
+ 1773763462.8593152,train_step,270,2,perf/gradient_accumulation_steps,4.0
351
+ 1773763462.8593152,train_step,270,2,system/cuda_memory_allocated_gb,15.915565013885498
352
+ 1773763462.8593152,train_step,270,2,system/cuda_max_memory_allocated_gb,94.47624206542969
353
+ 1773763519.993293,train_step,280,2,train/step_loss,0.9217604398727417
354
+ 1773763519.993293,train_step,280,2,train/step_real_loss,0.9217604398727417
355
+ 1773763519.993293,train_step,280,2,train/lr,1.488415843473942e-05
356
+ 1773763519.993293,train_step,280,2,perf/step_duration_sec,5.145696292165667
357
+ 1773763519.993293,train_step,280,2,perf/samples_per_sec,6.2187890973511335
358
+ 1773763519.993293,train_step,280,2,perf/tokens_per_sec,4732.109828765628
359
+ 1773763519.993293,train_step,280,2,perf/logical_batch_size,32.0
360
+ 1773763519.993293,train_step,280,2,perf/logical_token_count,24350.0
361
+ 1773763519.993293,train_step,280,2,perf/gradient_accumulation_steps,4.0
362
+ 1773763519.993293,train_step,280,2,system/cuda_memory_allocated_gb,15.915565013885498
363
+ 1773763519.993293,train_step,280,2,system/cuda_max_memory_allocated_gb,94.47624206542969
364
+ 1773763577.139687,train_step,290,2,train/step_loss,0.9289029836654663
365
+ 1773763577.139687,train_step,290,2,train/step_real_loss,0.9481655806303024
366
+ 1773763577.139687,train_step,290,2,train/lr,1.1826665812616183e-05
367
+ 1773763577.139687,train_step,290,2,train/step_canary_loss,0.3125
368
+ 1773763577.139687,train_step,290,2,perf/step_duration_sec,5.694677841849625
369
+ 1773763577.139687,train_step,290,2,perf/samples_per_sec,5.794884437094274
370
+ 1773763577.139687,train_step,290,2,perf/tokens_per_sec,4061.3359776095867
371
+ 1773763577.139687,train_step,290,2,perf/logical_batch_size,33.0
372
+ 1773763577.139687,train_step,290,2,perf/logical_token_count,23128.0
373
+ 1773763577.139687,train_step,290,2,perf/gradient_accumulation_steps,4.0
374
+ 1773763577.139687,train_step,290,2,system/cuda_memory_allocated_gb,15.915565013885498
375
+ 1773763577.139687,train_step,290,2,system/cuda_max_memory_allocated_gb,94.47624206542969
376
+ 1773763634.8883243,train_step,300,2,train/step_loss,0.9466440713766849
377
+ 1773763634.8883243,train_step,300,2,train/step_real_loss,0.8473204374313354
378
+ 1773763634.8883243,train_step,300,2,train/lr,9.078285077691178e-06
379
+ 1773763634.8883243,train_step,300,2,train/step_canary_loss,4.125
380
+ 1773763634.8883243,train_step,300,2,perf/step_duration_sec,6.2339927861467
381
+ 1773763634.8883243,train_step,300,2,perf/samples_per_sec,5.293557617412269
382
+ 1773763634.8883243,train_step,300,2,perf/tokens_per_sec,4104.752905210987
383
+ 1773763634.8883243,train_step,300,2,perf/logical_batch_size,33.0
384
+ 1773763634.8883243,train_step,300,2,perf/logical_token_count,25589.0
385
+ 1773763634.8883243,train_step,300,2,perf/gradient_accumulation_steps,4.0
386
+ 1773763634.8883243,train_step,300,2,system/cuda_memory_allocated_gb,15.915565013885498
387
+ 1773763634.8883243,train_step,300,2,system/cuda_max_memory_allocated_gb,94.47624206542969
388
+ 1773763647.6901762,eval_step,300,2,eval/loss,0.8076710475560945
389
+ 1773763647.6901762,eval_step,300,2,eval/duration_sec,12.799929299857467
390
+ 1773763704.81611,train_step,310,2,train/step_loss,0.9043312668800354
391
+ 1773763704.81611,train_step,310,2,train/step_real_loss,0.9043312668800354
392
+ 1773763704.81611,train_step,310,2,train/lr,6.661271481537157e-06
393
+ 1773763704.81611,train_step,310,2,perf/step_duration_sec,5.423097257036716
394
+ 1773763704.81611,train_step,310,2,perf/samples_per_sec,5.900687095087322
395
+ 1773763704.81611,train_step,310,2,perf/tokens_per_sec,4037.360748341778
396
+ 1773763704.81611,train_step,310,2,perf/logical_batch_size,32.0
397
+ 1773763704.81611,train_step,310,2,perf/logical_token_count,21895.0
398
+ 1773763704.81611,train_step,310,2,perf/gradient_accumulation_steps,4.0
399
+ 1773763704.81611,train_step,310,2,system/cuda_memory_allocated_gb,15.915565013885498
400
+ 1773763704.81611,train_step,310,2,system/cuda_max_memory_allocated_gb,94.47624206542969
401
+ 1773763760.6698298,train_step,320,2,train/step_loss,0.8735850304365158
402
+ 1773763760.6698298,train_step,320,2,train/step_real_loss,0.8735850304365158
403
+ 1773763760.6698298,train_step,320,2,train/lr,4.595197001556562e-06
404
+ 1773763760.6698298,train_step,320,2,perf/step_duration_sec,5.15081740077585
405
+ 1773763760.6698298,train_step,320,2,perf/samples_per_sec,6.212606176872034
406
+ 1773763760.6698298,train_step,320,2,perf/tokens_per_sec,4842.532370928723
407
+ 1773763760.6698298,train_step,320,2,perf/logical_batch_size,32.0
408
+ 1773763760.6698298,train_step,320,2,perf/logical_token_count,24943.0
409
+ 1773763760.6698298,train_step,320,2,perf/gradient_accumulation_steps,4.0
410
+ 1773763760.6698298,train_step,320,2,system/cuda_memory_allocated_gb,15.915565013885498
411
+ 1773763760.6698298,train_step,320,2,system/cuda_max_memory_allocated_gb,94.47624206542969
412
+ 1773763818.6010072,train_step,330,2,train/step_loss,0.7620985209941864
413
+ 1773763818.6010072,train_step,330,2,train/step_real_loss,0.7620985209941864
414
+ 1773763818.6010072,train_step,330,2,train/lr,2.8967918551955297e-06
415
+ 1773763818.6010072,train_step,330,2,perf/step_duration_sec,6.058840225916356
416
+ 1773763818.6010072,train_step,330,2,perf/samples_per_sec,5.281538843543317
417
+ 1773763818.6010072,train_step,330,2,perf/tokens_per_sec,4943.025213289962
418
+ 1773763818.6010072,train_step,330,2,perf/logical_batch_size,32.0
419
+ 1773763818.6010072,train_step,330,2,perf/logical_token_count,29949.0
420
+ 1773763818.6010072,train_step,330,2,perf/gradient_accumulation_steps,4.0
421
+ 1773763818.6010072,train_step,330,2,system/cuda_memory_allocated_gb,15.915565013885498
422
+ 1773763818.6010072,train_step,330,2,system/cuda_max_memory_allocated_gb,94.47624206542969
423
+ 1773763875.2762098,train_step,340,2,train/step_loss,0.9000806212425232
424
+ 1773763875.2762098,train_step,340,2,train/step_real_loss,0.9000806212425232
425
+ 1773763875.2762098,train_step,340,2,train/lr,1.5798090255558617e-06
426
+ 1773763875.2762098,train_step,340,2,perf/step_duration_sec,5.154371100012213
427
+ 1773763875.2762098,train_step,340,2,perf/samples_per_sec,6.2083228737496565
428
+ 1773763875.2762098,train_step,340,2,perf/tokens_per_sec,4376.285595724094
429
+ 1773763875.2762098,train_step,340,2,perf/logical_batch_size,32.0
430
+ 1773763875.2762098,train_step,340,2,perf/logical_token_count,22557.0
431
+ 1773763875.2762098,train_step,340,2,perf/gradient_accumulation_steps,4.0
432
+ 1773763875.2762098,train_step,340,2,system/cuda_memory_allocated_gb,15.915565013885498
433
+ 1773763875.2762098,train_step,340,2,system/cuda_max_memory_allocated_gb,94.47624206542969
434
+ 1773763932.3082285,train_step,350,2,train/step_loss,0.7404757142066956
435
+ 1773763932.3082285,train_step,350,2,train/step_real_loss,0.7404757142066956
436
+ 1773763932.3082285,train_step,350,2,train/lr,6.54912895420573e-07
437
+ 1773763932.3082285,train_step,350,2,perf/step_duration_sec,5.694617530796677
438
+ 1773763932.3082285,train_step,350,2,perf/samples_per_sec,5.619341391575985
439
+ 1773763932.3082285,train_step,350,2,perf/tokens_per_sec,4990.501968974935
440
+ 1773763932.3082285,train_step,350,2,perf/logical_batch_size,32.0
441
+ 1773763932.3082285,train_step,350,2,perf/logical_token_count,28419.0
442
+ 1773763932.3082285,train_step,350,2,perf/gradient_accumulation_steps,4.0
443
+ 1773763932.3082285,train_step,350,2,system/cuda_memory_allocated_gb,15.915565013885498
444
+ 1773763932.3082285,train_step,350,2,system/cuda_max_memory_allocated_gb,94.47624206542969
445
+ 1773763945.1132307,eval_step,350,2,eval/loss,0.807514699605795
446
+ 1773763945.1132307,eval_step,350,2,eval/duration_sec,12.803151289001107
447
+ 1773764000.5064592,train_step,360,2,train/step_loss,0.8157700151205063
448
+ 1773764000.5064592,train_step,360,2,train/step_real_loss,0.8157700151205063
449
+ 1773764000.5064592,train_step,360,2,train/lr,1.295928914885336e-07
450
+ 1773764000.5064592,train_step,360,2,perf/step_duration_sec,5.695594378747046
451
+ 1773764000.5064592,train_step,360,2,perf/samples_per_sec,5.618377621729371
452
+ 1773764000.5064592,train_step,360,2,perf/tokens_per_sec,4667.116060650316
453
+ 1773764000.5064592,train_step,360,2,perf/logical_batch_size,32.0
454
+ 1773764000.5064592,train_step,360,2,perf/logical_token_count,26582.0
455
+ 1773764000.5064592,train_step,360,2,perf/gradient_accumulation_steps,4.0
456
+ 1773764000.5064592,train_step,360,2,system/cuda_memory_allocated_gb,15.915565013885498
457
+ 1773764000.5064592,train_step,360,2,system/cuda_max_memory_allocated_gb,94.47624206542969
458
+ 1773764058.7973218,train_epoch,368,2,train/epoch_loss,0.856036927981092
459
+ 1773764058.7973218,train_epoch,368,2,train/epoch_real_loss,0.8280306565727147
460
+ 1773764058.7973218,train_epoch,368,2,train/epoch_canary_loss,3.6806401156922846
461
+ 1773764058.7973218,train_epoch,368,2,perf/epoch_duration_sec,1096.7485609338619
462
+ 1773764058.7973218,train_epoch,368,2,perf/epoch_samples_per_sec,43.38460217306761
463
+ 1773764058.7973218,train_epoch,368,2,perf/epoch_tokens_per_sec,34225.847506959835
464
+ 1773764058.7973218,train_epoch,368,2,perf/epoch_samples,47582.0
465
+ 1773764058.7973218,train_epoch,368,2,perf/epoch_tokens,37537149.0
466
+ 1773764058.7973218,train_epoch,368,2,system/cuda_epoch_peak_memory_gb,94.47624206542969
467
+ 1773764058.7973218,train_epoch,368,2,eval/loss,0.8075161480750794
468
+ 1773764058.7973218,train_epoch,368,2,eval/duration_sec,12.857198356185108
469
+ 1773764072.6744637,audit_epoch,368,2,audit/delta,1e-05
470
+ 1773764072.6744637,audit_epoch,368,2,audit/num_canaries,500.0
471
+ 1773764072.6744637,audit_epoch,368,2,audit/num_members,250.0
472
+ 1773764072.6744637,audit_epoch,368,2,audit/paper_guess_fraction,0.2
473
+ 1773764072.6744637,audit_epoch,368,2,audit/paper_guess_steps,20.0
474
+ 1773764072.6744637,audit_epoch,368,2,audit/loss/auc,0.968584
475
+ 1773764072.6744637,audit_epoch,368,2,audit/loss/empirical_epsilon/0.05,3.4791953936219215
476
+ 1773764072.6744637,audit_epoch,368,2,audit/loss/empirical_epsilon/0.01,3.023197554051876
477
+ 1773764072.6744637,audit_epoch,368,2,audit/loss/empirical_epsilon_details/0.05/epsilon,3.4791953936219215
478
+ 1773764072.6744637,audit_epoch,368,2,audit/loss/empirical_epsilon_details/0.05/num_guesses,100.0
479
+ 1773764072.6744637,audit_epoch,368,2,audit/loss/empirical_epsilon_details/0.05/correct_guesses,100.0
480
+ 1773764072.6744637,audit_epoch,368,2,audit/loss/empirical_epsilon_details/0.01/epsilon,3.023197554051876
481
+ 1773764072.6744637,audit_epoch,368,2,audit/loss/empirical_epsilon_details/0.01/num_guesses,100.0
482
+ 1773764072.6744637,audit_epoch,368,2,audit/loss/empirical_epsilon_details/0.01/correct_guesses,100.0
483
+ 1773764072.6744637,audit_epoch,368,2,audit/embedding/auc,0.883776
484
+ 1773764072.6744637,audit_epoch,368,2,audit/embedding/empirical_epsilon/0.05,3.4791953936219215
485
+ 1773764072.6744637,audit_epoch,368,2,audit/embedding/empirical_epsilon/0.01,3.023197554051876
486
+ 1773764072.6744637,audit_epoch,368,2,audit/embedding/empirical_epsilon_details/0.05/epsilon,3.4791953936219215
487
+ 1773764072.6744637,audit_epoch,368,2,audit/embedding/empirical_epsilon_details/0.05/num_guesses,100.0
488
+ 1773764072.6744637,audit_epoch,368,2,audit/embedding/empirical_epsilon_details/0.05/correct_guesses,100.0
489
+ 1773764072.6744637,audit_epoch,368,2,audit/embedding/empirical_epsilon_details/0.01/epsilon,3.023197554051876
490
+ 1773764072.6744637,audit_epoch,368,2,audit/embedding/empirical_epsilon_details/0.01/num_guesses,100.0
491
+ 1773764072.6744637,audit_epoch,368,2,audit/embedding/empirical_epsilon_details/0.01/correct_guesses,100.0
492
+ 1773764072.6744637,audit_epoch,368,2,perf/audit_duration_sec,7.556974642910063
493
+ 1773764086.367738,audit_final,368,2,audit/delta,1e-05
494
+ 1773764086.367738,audit_final,368,2,audit/num_canaries,500.0
495
+ 1773764086.367738,audit_final,368,2,audit/num_members,250.0
496
+ 1773764086.367738,audit_final,368,2,audit/paper_guess_fraction,0.2
497
+ 1773764086.367738,audit_final,368,2,audit/paper_guess_steps,20.0
498
+ 1773764086.367738,audit_final,368,2,audit/loss/auc,0.968584
499
+ 1773764086.367738,audit_final,368,2,audit/loss/empirical_epsilon/0.05,3.4791953936219215
500
+ 1773764086.367738,audit_final,368,2,audit/loss/empirical_epsilon/0.01,3.023197554051876
501
+ 1773764086.367738,audit_final,368,2,audit/loss/empirical_epsilon_details/0.05/epsilon,3.4791953936219215
502
+ 1773764086.367738,audit_final,368,2,audit/loss/empirical_epsilon_details/0.05/num_guesses,100.0
503
+ 1773764086.367738,audit_final,368,2,audit/loss/empirical_epsilon_details/0.05/correct_guesses,100.0
504
+ 1773764086.367738,audit_final,368,2,audit/loss/empirical_epsilon_details/0.01/epsilon,3.023197554051876
505
+ 1773764086.367738,audit_final,368,2,audit/loss/empirical_epsilon_details/0.01/num_guesses,100.0
506
+ 1773764086.367738,audit_final,368,2,audit/loss/empirical_epsilon_details/0.01/correct_guesses,100.0
507
+ 1773764086.367738,audit_final,368,2,audit/embedding/auc,0.883776
508
+ 1773764086.367738,audit_final,368,2,audit/embedding/empirical_epsilon/0.05,3.4791953936219215
509
+ 1773764086.367738,audit_final,368,2,audit/embedding/empirical_epsilon/0.01,3.023197554051876
510
+ 1773764086.367738,audit_final,368,2,audit/embedding/empirical_epsilon_details/0.05/epsilon,3.4791953936219215
511
+ 1773764086.367738,audit_final,368,2,audit/embedding/empirical_epsilon_details/0.05/num_guesses,100.0
512
+ 1773764086.367738,audit_final,368,2,audit/embedding/empirical_epsilon_details/0.05/correct_guesses,100.0
513
+ 1773764086.367738,audit_final,368,2,audit/embedding/empirical_epsilon_details/0.01/epsilon,3.023197554051876
514
+ 1773764086.367738,audit_final,368,2,audit/embedding/empirical_epsilon_details/0.01/num_guesses,100.0
515
+ 1773764086.367738,audit_final,368,2,audit/embedding/empirical_epsilon_details/0.01/correct_guesses,100.0
516
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/duration,2345.9966679112986
517
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/emissions,0.09022432714096462
518
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/emissions_rate,3.8458847096868924e-05
519
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/cpu_power,72.02285277932866
520
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/gpu_power,3280.290622412428
521
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/ram_power,54.0
522
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/cpu_energy,0.045218986505725985
523
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/gpu_energy,2.137964725370466
524
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/ram_energy,0.03390259211605879
525
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/energy_consumed,2.2170863039922497
526
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/water_consumed,0.0
527
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/cpu_count,256.0
528
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/gpu_count,8.0
529
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/longitude,16.1885
530
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/latitude,58.594
531
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/ram_total_size,1511.49019241333
532
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/cpu_utilization_percent,3.3142796066695253
533
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/gpu_utilization_percent,88.58721675929884
534
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/ram_utilization_percent,5.287772552372644
535
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/ram_used_gb,79.7947571596665
536
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/pue,1.0
537
+ 1773764086.9161372,energy_final,368,,energy/codecarbon/wue,0.0
qwen3-4b-instruct/base/summary.json ADDED
@@ -0,0 +1,71 @@
1
+ {
2
+ "audit/delta": 1e-05,
3
+ "audit/embedding/auc": 0.883776,
4
+ "audit/embedding/empirical_epsilon/0.01": 3.023197554051876,
5
+ "audit/embedding/empirical_epsilon/0.05": 3.4791953936219215,
6
+ "audit/embedding/empirical_epsilon_details/0.01/correct_guesses": 100.0,
7
+ "audit/embedding/empirical_epsilon_details/0.01/epsilon": 3.023197554051876,
8
+ "audit/embedding/empirical_epsilon_details/0.01/num_guesses": 100.0,
9
+ "audit/embedding/empirical_epsilon_details/0.05/correct_guesses": 100.0,
10
+ "audit/embedding/empirical_epsilon_details/0.05/epsilon": 3.4791953936219215,
11
+ "audit/embedding/empirical_epsilon_details/0.05/num_guesses": 100.0,
12
+ "audit/loss/auc": 0.968584,
13
+ "audit/loss/empirical_epsilon/0.01": 3.023197554051876,
14
+ "audit/loss/empirical_epsilon/0.05": 3.4791953936219215,
15
+ "audit/loss/empirical_epsilon_details/0.01/correct_guesses": 100.0,
16
+ "audit/loss/empirical_epsilon_details/0.01/epsilon": 3.023197554051876,
17
+ "audit/loss/empirical_epsilon_details/0.01/num_guesses": 100.0,
18
+ "audit/loss/empirical_epsilon_details/0.05/correct_guesses": 100.0,
19
+ "audit/loss/empirical_epsilon_details/0.05/epsilon": 3.4791953936219215,
20
+ "audit/loss/empirical_epsilon_details/0.05/num_guesses": 100.0,
21
+ "audit/num_canaries": 500.0,
22
+ "audit/num_members": 250.0,
23
+ "audit/paper_guess_fraction": 0.2,
24
+ "audit/paper_guess_steps": 20.0,
25
+ "energy/codecarbon/cpu_count": 256.0,
26
+ "energy/codecarbon/cpu_energy": 0.045218986505725985,
27
+ "energy/codecarbon/cpu_power": 72.02285277932866,
28
+ "energy/codecarbon/cpu_utilization_percent": 3.3142796066695253,
29
+ "energy/codecarbon/duration": 2345.9966679112986,
30
+ "energy/codecarbon/emissions": 0.09022432714096462,
31
+ "energy/codecarbon/emissions_rate": 3.8458847096868924e-05,
32
+ "energy/codecarbon/energy_consumed": 2.2170863039922497,
33
+ "energy/codecarbon/gpu_count": 8.0,
34
+ "energy/codecarbon/gpu_energy": 2.137964725370466,
35
+ "energy/codecarbon/gpu_power": 3280.290622412428,
36
+ "energy/codecarbon/gpu_utilization_percent": 88.58721675929884,
37
+ "energy/codecarbon/latitude": 58.594,
38
+ "energy/codecarbon/longitude": 16.1885,
39
+ "energy/codecarbon/pue": 1.0,
40
+ "energy/codecarbon/ram_energy": 0.03390259211605879,
41
+ "energy/codecarbon/ram_power": 54.0,
42
+ "energy/codecarbon/ram_total_size": 1511.49019241333,
43
+ "energy/codecarbon/ram_used_gb": 79.7947571596665,
44
+ "energy/codecarbon/ram_utilization_percent": 5.287772552372644,
45
+ "energy/codecarbon/water_consumed": 0.0,
46
+ "energy/codecarbon/wue": 0.0,
47
+ "eval/duration_sec": 12.857198356185108,
48
+ "eval/loss": 0.8075161480750794,
49
+ "perf/audit_duration_sec": 7.556974642910063,
50
+ "perf/epoch_duration_sec": 1096.7485609338619,
51
+ "perf/epoch_samples": 47582.0,
52
+ "perf/epoch_samples_per_sec": 43.38460217306761,
53
+ "perf/epoch_tokens": 37537149.0,
54
+ "perf/epoch_tokens_per_sec": 34225.847506959835,
55
+ "perf/gradient_accumulation_steps": 4.0,
56
+ "perf/logical_batch_size": 32.0,
57
+ "perf/logical_token_count": 26582.0,
58
+ "perf/samples_per_sec": 5.618377621729371,
59
+ "perf/step_duration_sec": 5.695594378747046,
60
+ "perf/tokens_per_sec": 4667.116060650316,
61
+ "system/cuda_epoch_peak_memory_gb": 94.47624206542969,
62
+ "system/cuda_max_memory_allocated_gb": 94.47624206542969,
63
+ "system/cuda_memory_allocated_gb": 15.915565013885498,
64
+ "train/epoch_canary_loss": 3.6806401156922846,
65
+ "train/epoch_loss": 0.856036927981092,
66
+ "train/epoch_real_loss": 0.8280306565727147,
67
+ "train/lr": 1.295928914885336e-07,
68
+ "train/step_canary_loss": 4.125,
69
+ "train/step_loss": 0.8157700151205063,
70
+ "train/step_real_loss": 0.8157700151205063
71
+ }
qwen3-4b-instruct/base/tensorboard/events.out.tfevents.1773761739.7b654b6988b0.32156.0 ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3b55227175e1ecbcf79b12550edd079956b7869b97adc97b1aadf0a02287d39a
3
+ size 36378
qwen3-4b-instruct/base/tokenizer/chat_template.jinja ADDED
@@ -0,0 +1,61 @@
1
+ {%- if tools %}
2
+ {{- '<|im_start|>system\n' }}
3
+ {%- if messages[0].role == 'system' %}
4
+ {{- messages[0].content + '\n\n' }}
5
+ {%- endif %}
6
+ {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
7
+ {%- for tool in tools %}
8
+ {{- "\n" }}
9
+ {{- tool | tojson }}
10
+ {%- endfor %}
11
+ {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
12
+ {%- else %}
13
+ {%- if messages[0].role == 'system' %}
14
+ {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
15
+ {%- endif %}
16
+ {%- endif %}
17
+ {%- for message in messages %}
18
+ {%- if message.content is string %}
19
+ {%- set content = message.content %}
20
+ {%- else %}
21
+ {%- set content = '' %}
22
+ {%- endif %}
23
+ {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
24
+ {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
25
+ {%- elif message.role == "assistant" %}
26
+ {{- '<|im_start|>' + message.role + '\n' + content }}
27
+ {%- if message.tool_calls %}
28
+ {%- for tool_call in message.tool_calls %}
29
+ {%- if (loop.first and content) or (not loop.first) %}
30
+ {{- '\n' }}
31
+ {%- endif %}
32
+ {%- if tool_call.function %}
33
+ {%- set tool_call = tool_call.function %}
34
+ {%- endif %}
35
+ {{- '<tool_call>\n{"name": "' }}
36
+ {{- tool_call.name }}
37
+ {{- '", "arguments": ' }}
38
+ {%- if tool_call.arguments is string %}
39
+ {{- tool_call.arguments }}
40
+ {%- else %}
41
+ {{- tool_call.arguments | tojson }}
42
+ {%- endif %}
43
+ {{- '}\n</tool_call>' }}
44
+ {%- endfor %}
45
+ {%- endif %}
46
+ {{- '<|im_end|>\n' }}
47
+ {%- elif message.role == "tool" %}
48
+ {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
49
+ {{- '<|im_start|>user' }}
50
+ {%- endif %}
51
+ {{- '\n<tool_response>\n' }}
52
+ {{- content }}
53
+ {{- '\n</tool_response>' }}
54
+ {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
55
+ {{- '<|im_end|>\n' }}
56
+ {%- endif %}
57
+ {%- endif %}
58
+ {%- endfor %}
59
+ {%- if add_generation_prompt %}
60
+ {{- '<|im_start|>assistant\n' }}
61
+ {%- endif %}
qwen3-4b-instruct/base/tokenizer/tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0e9c8aef460c70c1e1c32afe895f455856c0075e5706f06e6d80b2f581137715
3
+ size 11517150
qwen3-4b-instruct/base/tokenizer/tokenizer_config.json ADDED
@@ -0,0 +1,516 @@
1
+ {
2
+ "add_prefix_space": false,
3
+ "backend": "tokenizers",
4
+ "bos_token": null,
5
+ "clean_up_tokenization_spaces": false,
6
+ "eos_token": "<|im_end|>",
7
+ "errors": "replace",
8
+ "extra_special_tokens": [
9
+ "865331112869",
10
+ "569765693871",
11
+ "485177821815",
12
+ "135441121756",
13
+ "367459894796",
14
+ "877482678543",
15
+ "457919547633",
16
+ "765474393376",
17
+ "114848338811",
18
+ "746285987371",
19
+ "649291669397",
20
+ "927914615679",
21
+ "445925149649",
22
+ "691587454538",
23
+ "143777992227",
24
+ "997981281989",
25
+ "425949483533",
26
+ "982993456429",
27
+ "718726519731",
28
+ "172599315861",
29
+ "643489267333",
30
+ "282322838685",
31
+ "781653545886",
32
+ "796415361892",
33
+ "841991688488",
34
+ "211411365397",
35
+ "698218415444",
36
+ "355977139358",
37
+ "682564697312",
38
+ "383837596997",
39
+ "689362171782",
40
+ "749966767285",
41
+ "753159165157",
42
+ "795693824762",
43
+ "669689115557",
44
+ "327491773134",
45
+ "983569279932",
46
+ "612128769512",
47
+ "374327157578",
48
+ "311632789559",
49
+ "523918658846",
50
+ "765981581453",
51
+ "794825141891",
52
+ "873898736873",
53
+ "447445629421",
54
+ "473822473819",
55
+ "181439694557",
56
+ "592538279337",
57
+ "668134915514",
58
+ "643692393748",
59
+ "696651276628",
60
+ "853859348234",
61
+ "778466723723",
62
+ "929826356991",
63
+ "272362973463",
64
+ "694235616268",
65
+ "281673864127",
66
+ "479676316326",
67
+ "646979124677",
68
+ "922327493433",
69
+ "883685933161",
70
+ "264259917554",
71
+ "836746273134",
72
+ "658481324922",
73
+ "481884157827",
74
+ "587787496812",
75
+ "579184949249",
76
+ "912193598348",
77
+ "529679678956",
78
+ "795838284624",
79
+ "159337222655",
80
+ "173781362446",
81
+ "773687856563",
82
+ "535787224917",
83
+ "351885857332",
84
+ "578827344666",
85
+ "198462689911",
86
+ "722618266242",
87
+ "952872416512",
88
+ "517778845323",
89
+ "749665846687",
90
+ "661436365453",
91
+ "259666844669",
92
+ "242851284913",
93
+ "514532995959",
94
+ "161588262349",
95
+ "742765629356",
96
+ "225164373623",
97
+ "676539973863",
98
+ "826214551218",
99
+ "182345464792",
100
+ "232776999554",
101
+ "337326533813",
102
+ "676676697292",
103
+ "929185622831",
104
+ "545512344383",
105
+ "499444466686",
106
+ "314697386682",
107
+ "517379856925",
108
+ "379557332953",
109
+ "614797267726",
110
+ "429781429464",
111
+ "922466849763",
112
+ "721737645236",
113
+ "479227349997",
114
+ "136931728327",
115
+ "259533577263",
116
+ "488538864842",
117
+ "937495658852",
118
+ "489991411364",
119
+ "499148455254",
120
+ "441373944925",
121
+ "899151413682",
122
+ "467893531755",
123
+ "527117488925",
124
+ "928335588653",
125
+ "374439448821",
126
+ "879425227932",
127
+ "867678158885",
128
+ "399749397872",
129
+ "129693547287",
130
+ "689285841825",
131
+ "771619544974",
132
+ "724883568652",
133
+ "516968424863",
134
+ "733737988257",
135
+ "852347289392",
136
+ "296953381169",
137
+ "377273562477",
138
+ "262296912232",
139
+ "547149832394",
140
+ "298464134954",
141
+ "216667245274",
142
+ "843998562287",
143
+ "572154333646",
144
+ "124589118494",
145
+ "841824384614",
146
+ "232896526252",
147
+ "295448593321",
148
+ "123741461297",
149
+ "653573457168",
150
+ "196735786156",
151
+ "377338713663",
152
+ "964342468552",
153
+ "586855179568",
154
+ "484773717614",
155
+ "894885246797",
156
+ "677896358599",
157
+ "848845611563",
158
+ "851852651677",
159
+ "398549545767",
160
+ "454244839926",
161
+ "799364566435",
162
+ "967114116556",
163
+ "817378986438",
164
+ "233795848681",
165
+ "824387273757",
166
+ "916198946615",
167
+ "563117729724",
168
+ "951794811935",
169
+ "374598961236",
170
+ "922867396683",
171
+ "765737843639",
172
+ "175469284871",
173
+ "231853711778",
174
+ "662426712668",
175
+ "711412347158",
176
+ "753466987363",
177
+ "513361312532",
178
+ "712992815957",
179
+ "971621888444",
180
+ "829235161526",
181
+ "585544633356",
182
+ "582471228164",
183
+ "678666359123",
184
+ "557533689478",
185
+ "632962475133",
186
+ "484489193824",
187
+ "489562189822",
188
+ "589547936288",
189
+ "363214487524",
190
+ "244885399387",
191
+ "431751228368",
192
+ "433581868192",
193
+ "486391569221",
194
+ "185438575221",
195
+ "126574388585",
196
+ "741757479784",
197
+ "529854679937",
198
+ "996116119839",
199
+ "616248973917",
200
+ "763531783491",
201
+ "955456118295",
202
+ "364196983365",
203
+ "195792996468",
204
+ "151859598873",
205
+ "399223169721",
206
+ "938488813964",
207
+ "961981959227",
208
+ "183368827562",
209
+ "533417736566",
210
+ "786391632558",
211
+ "665661658354",
212
+ "693281533643",
213
+ "475794684356",
214
+ "652154162978",
215
+ "753233719644",
216
+ "668514843129",
217
+ "819162623892",
218
+ "941169431859",
+ "877385381798",
+ "752644929761",
+ "881136466196",
+ "275597777299",
+ "731681792655",
+ "961133895172",
+ "864718285734",
+ "963852916563",
+ "319584985416",
+ "563365646341",
+ "811371928234",
+ "837131396371",
+ "267514771964",
+ "944513428457",
+ "117298239631",
+ "158142752582",
+ "252867443568",
+ "839269684865",
+ "612788593128",
+ "145669731981",
+ "121557291859",
+ "245416776926",
+ "799417897197",
+ "997958836435",
+ "892336777248",
+ "158929292238",
+ "581976444672",
+ "897784492783",
+ "492373714791",
+ "512659818733",
+ "881112998642",
+ "619454958782",
+ "431149748713",
+ "624221476921",
+ "125866399464",
+ "339882449689",
+ "186198784585",
+ "943193294691",
+ "955668961269",
+ "232787996724",
+ "215671314196",
+ "286173241916",
+ "745977673725",
+ "556976448182",
+ "599961512792",
+ "766294538337",
+ "934912591213",
+ "295118729589",
+ "529455466433",
+ "196119929397",
+ "379571934299",
+ "251789649997",
+ "564544131355",
+ "244371196654",
+ "384598329253",
+ "887753195844",
+ "364947325679",
+ "655517954651",
+ "673948786567",
+ "857231548835",
+ "816115936673",
+ "644234165531",
+ "182782912224",
+ "234316622259",
+ "421369185549",
+ "434632855397",
+ "921889371893",
+ "415956914763",
+ "598916996413",
+ "773671349113",
+ "952465217972",
+ "117657531962",
+ "729825168745",
+ "691315125346",
+ "768461952319",
+ "664847713559",
+ "953267689786",
+ "886464195129",
+ "824488329416",
+ "837873762491",
+ "532833541879",
+ "669183782449",
+ "941976537588",
+ "739394546916",
+ "267954879268",
+ "637551427887",
+ "217756494954",
+ "524444658383",
+ "117783274348",
+ "138218735276",
+ "814611949491",
+ "711641973413",
+ "499156317423",
+ "515856611931",
+ "454164859837",
+ "345271433112",
+ "462294118988",
+ "511785788222",
+ "497294727353",
+ "866519986723",
+ "334513529294",
+ "549946382131",
+ "284445431422",
+ "396521188476",
+ "421435255895",
+ "133373659361",
+ "322683334381",
+ "228358422847",
+ "291762694874",
+ "143182978129",
+ "511923256573",
+ "327158398268",
+ "879764613759",
+ "564395222747",
+ "451161679736",
+ "538631466654",
+ "221762325616",
+ "218391991184",
+ "322589379462",
+ "876537814263",
+ "152676556624",
+ "332522971941",
+ "884354318946",
+ "513349618943",
+ "116639746413",
+ "635185846287",
+ "993832498489",
+ "813981174797",
+ "438745114173",
+ "983493951323",
+ "724492262421",
+ "622553389126",
+ "889965243135",
+ "364492359246",
+ "154962668224",
+ "179564995814",
+ "418412875665",
+ "718951851413",
+ "699446724178",
+ "624266421831",
+ "815458725125",
+ "455423278865",
+ "393741199486",
+ "328552864359",
+ "211662639865",
+ "218784516525",
+ "762486672996",
+ "142799718159",
+ "858146415154",
+ "767858144912",
+ "571317457151",
+ "635127952696",
+ "116427191984",
+ "268921994538",
+ "523937669294",
+ "165429152138",
+ "739246183345",
+ "591464355756",
+ "212985874612",
+ "191887635211",
+ "967214577653",
+ "119342152414",
+ "946444632795",
+ "618423867817",
+ "228565148417",
+ "729116422489",
+ "527874729936",
+ "739784153482",
+ "387763951128",
+ "331369926711",
+ "562716493614",
+ "739667844957",
+ "562389434565",
+ "256497188281",
+ "859927364588",
+ "417668946583",
+ "357621613582",
+ "438435178228",
+ "485692541169",
+ "825815739116",
+ "342221452223",
+ "697747991249",
+ "716763689965",
+ "141499982867",
+ "818479319499",
+ "336813343298",
+ "594688742928",
+ "472129283475",
+ "514354144759",
+ "349249721685",
+ "546276298359",
+ "353755529131",
+ "315534574435",
+ "523723475786",
+ "215826764872",
+ "367968398551",
+ "569853653352",
+ "389715484387",
+ "293847485454",
+ "714738141818",
+ "178478368922",
+ "581493616981",
+ "589439538674",
+ "846657726193",
+ "722339992679",
+ "138154781148",
+ "757785319772",
+ "492516914298",
+ "919181521716",
+ "985781138935",
+ "476969195485",
+ "313145133463",
+ "758963111966",
+ "147541537162",
+ "557163366873",
+ "144373897488",
+ "522515164754",
+ "724964923582",
+ "284776712475",
+ "375429755114",
+ "181233596124",
+ "948585673431",
+ "243165586174",
+ "396847976144",
+ "997724962668",
+ "558837194455",
+ "163165456396",
+ "378749551722",
+ "161238482259",
+ "754978243758",
+ "195388849133",
+ "229775525672",
+ "262437452884",
+ "441377892146",
+ "451885565366",
+ "981277526855",
+ "762495822823",
+ "368763327262",
+ "757422791351",
+ "636324136426",
+ "214193645583",
+ "412843856172",
+ "179386156569",
+ "756916173536",
+ "892697125149",
+ "625334487352",
+ "941861857715",
+ "887417525236",
+ "649516938598",
+ "717628619782",
+ "438124184139",
+ "547563892268",
+ "856317483891",
+ "313313831273",
+ "371496153876",
+ "587541149322",
+ "265847332563",
+ "449549215429",
+ "163497196769",
+ "861342291298",
+ "268433315926",
+ "774679513717",
+ "851254219729",
+ "583527834464",
+ "488496781997",
+ "556814553861",
+ "482829231639",
+ "618878266619",
+ "147444452794",
+ "949235426629",
+ "357299947518",
+ "175528632226",
+ "645527857972",
+ "186872457894",
+ "552738847828",
+ "626748382482",
+ "921894985642",
+ "943878645871",
+ "859289776479",
+ "614583493135",
+ "933775286797",
+ "332234613346",
+ "325196781219",
+ "142526557681",
+ "356722692178",
+ "449318681694",
+ "687284547244",
+ "947262995132",
+ "893974619684",
+ "797238311233"
+ ],
+ "is_local": false,
+ "model_max_length": 1010000,
+ "pad_token": "<|endoftext|>",
+ "split_special_tokens": false,
+ "tokenizer_class": "Qwen2Tokenizer",
+ "unk_token": null
+ }
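The tail of `tokenizer_config.json` above closes the canary-ID list and sets the remaining scalar fields (`pad_token`, `model_max_length`, `tokenizer_class`, and a JSON `null` for `unk_token`). A minimal sketch of how those fields parse, assuming this fragment's structure (the surrounding keys earlier in the file are omitted here):

```python
import json

# Tail of tokenizer_config.json as shown above; earlier keys
# (e.g. the long additional-token list) are omitted for brevity.
fragment = """
{
  "is_local": false,
  "model_max_length": 1010000,
  "pad_token": "<|endoftext|>",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}
"""

cfg = json.loads(fragment)
# JSON null becomes Python None, so a missing unk_token is easy to branch on.
print(cfg["tokenizer_class"], cfg["pad_token"], cfg["unk_token"])
# → Qwen2Tokenizer <|endoftext|> None
```

Note that `json.loads` maps `false` to `False` and `null` to `None`, which is how downstream tokenizer-loading code typically distinguishes "no unk token" from an empty string.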
qwen3-4b-instruct/base/train.log ADDED
@@ -0,0 +1,43 @@
+ 2026-03-17 15:38:14,810 [INFO] new_opacus_codex.train_steps: epoch=1 step=10 loss=1.2991
+ 2026-03-17 15:39:10,798 [INFO] new_opacus_codex.train_steps: epoch=1 step=20 loss=1.1180
+ 2026-03-17 15:40:07,300 [INFO] new_opacus_codex.train_steps: epoch=1 step=30 loss=1.1701
+ 2026-03-17 15:41:05,156 [INFO] new_opacus_codex.train_steps: epoch=1 step=40 loss=1.0721
+ 2026-03-17 15:42:01,272 [INFO] new_opacus_codex.train_steps: epoch=1 step=50 loss=0.9707
+ 2026-03-17 15:42:14,073 [INFO] new_opacus_codex.train_steps: eval event=eval_step epoch=1 step=50 eval_loss=0.8534 duration_sec=12.80
+ 2026-03-17 15:43:10,275 [INFO] new_opacus_codex.train_steps: epoch=1 step=60 loss=1.0486
+ 2026-03-17 15:44:09,206 [INFO] new_opacus_codex.train_steps: epoch=1 step=70 loss=0.9598
+ 2026-03-17 15:45:05,719 [INFO] new_opacus_codex.train_steps: epoch=1 step=80 loss=0.9536
+ 2026-03-17 15:46:03,202 [INFO] new_opacus_codex.train_steps: epoch=1 step=90 loss=0.8977
+ 2026-03-17 15:46:59,339 [INFO] new_opacus_codex.train_steps: epoch=1 step=100 loss=0.9494
+ 2026-03-17 15:47:12,127 [INFO] new_opacus_codex.train_steps: eval event=eval_step epoch=1 step=100 eval_loss=0.8295 duration_sec=12.79
+ 2026-03-17 15:48:09,329 [INFO] new_opacus_codex.train_steps: epoch=1 step=110 loss=0.9597
+ 2026-03-17 15:49:04,919 [INFO] new_opacus_codex.train_steps: epoch=1 step=120 loss=0.9211
+ 2026-03-17 15:50:01,866 [INFO] new_opacus_codex.train_steps: epoch=1 step=130 loss=0.9734
+ 2026-03-17 15:50:59,563 [INFO] new_opacus_codex.train_steps: epoch=1 step=140 loss=0.9642
+ 2026-03-17 15:51:57,620 [INFO] new_opacus_codex.train_steps: epoch=1 step=150 loss=0.9605
+ 2026-03-17 15:52:10,416 [INFO] new_opacus_codex.train_steps: eval event=eval_step epoch=1 step=150 eval_loss=0.8174 duration_sec=12.79
+ 2026-03-17 15:53:05,969 [INFO] new_opacus_codex.train_steps: epoch=1 step=160 loss=0.8810
+ 2026-03-17 15:54:01,078 [INFO] new_opacus_codex.train_steps: epoch=1 step=170 loss=0.9470
+ 2026-03-17 15:54:58,843 [INFO] new_opacus_codex.train_steps: epoch=1 step=180 loss=0.8985
+ 2026-03-17 15:56:24,331 [INFO] new_opacus_codex.train_steps: epoch=2 step=190 loss=0.8604
+ 2026-03-17 15:57:20,715 [INFO] new_opacus_codex.train_steps: epoch=2 step=200 loss=0.8534
+ 2026-03-17 15:57:33,520 [INFO] new_opacus_codex.train_steps: eval event=eval_step epoch=2 step=200 eval_loss=0.8111 duration_sec=12.80
+ 2026-03-17 15:58:30,153 [INFO] new_opacus_codex.train_steps: epoch=2 step=210 loss=0.8451
+ 2026-03-17 15:59:28,110 [INFO] new_opacus_codex.train_steps: epoch=2 step=220 loss=0.8697
+ 2026-03-17 16:00:24,220 [INFO] new_opacus_codex.train_steps: epoch=2 step=230 loss=0.8622
+ 2026-03-17 16:01:22,063 [INFO] new_opacus_codex.train_steps: epoch=2 step=240 loss=0.8586
+ 2026-03-17 16:02:17,716 [INFO] new_opacus_codex.train_steps: epoch=2 step=250 loss=0.8370
+ 2026-03-17 16:02:30,526 [INFO] new_opacus_codex.train_steps: eval event=eval_step epoch=2 step=250 eval_loss=0.8086 duration_sec=12.81
+ 2026-03-17 16:03:26,908 [INFO] new_opacus_codex.train_steps: epoch=2 step=260 loss=0.8445
+ 2026-03-17 16:04:22,858 [INFO] new_opacus_codex.train_steps: epoch=2 step=270 loss=0.8807
+ 2026-03-17 16:05:19,992 [INFO] new_opacus_codex.train_steps: epoch=2 step=280 loss=0.8859
+ 2026-03-17 16:06:17,139 [INFO] new_opacus_codex.train_steps: epoch=2 step=290 loss=0.8174
+ 2026-03-17 16:07:14,887 [INFO] new_opacus_codex.train_steps: epoch=2 step=300 loss=0.8824
+ 2026-03-17 16:07:27,689 [INFO] new_opacus_codex.train_steps: eval event=eval_step epoch=2 step=300 eval_loss=0.8077 duration_sec=12.80
+ 2026-03-17 16:08:24,815 [INFO] new_opacus_codex.train_steps: epoch=2 step=310 loss=0.8392
+ 2026-03-17 16:09:20,669 [INFO] new_opacus_codex.train_steps: epoch=2 step=320 loss=0.8540
+ 2026-03-17 16:10:18,600 [INFO] new_opacus_codex.train_steps: epoch=2 step=330 loss=0.8799
+ 2026-03-17 16:11:15,275 [INFO] new_opacus_codex.train_steps: epoch=2 step=340 loss=0.8421
+ 2026-03-17 16:12:12,307 [INFO] new_opacus_codex.train_steps: epoch=2 step=350 loss=0.9073
+ 2026-03-17 16:12:25,113 [INFO] new_opacus_codex.train_steps: eval event=eval_step epoch=2 step=350 eval_loss=0.8075 duration_sec=12.80
+ 2026-03-17 16:13:20,505 [INFO] new_opacus_codex.train_steps: epoch=1 step=360 loss=0.8343
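The `train.log` lines above follow a regular `epoch=… step=… loss=…` format, with periodic `eval_loss` entries every 50 steps. A minimal sketch of extracting the training-loss curve from such lines with a regex (the variable names and the two sample lines are illustrative, copied from the log above):

```python
import re

# Matches the tail of a train-step line: "epoch=1 step=10 loss=1.2991".
# Eval lines ("eval_loss=...") do not end this way, so they are skipped.
LINE = re.compile(r"epoch=(\d+) step=(\d+) loss=([\d.]+)$")

sample = [
    "2026-03-17 15:38:14,810 [INFO] new_opacus_codex.train_steps: epoch=1 step=10 loss=1.2991",
    "2026-03-17 15:42:14,073 [INFO] new_opacus_codex.train_steps: eval event=eval_step epoch=1 step=50 eval_loss=0.8534 duration_sec=12.80",
    "2026-03-17 16:13:20,505 [INFO] new_opacus_codex.train_steps: epoch=2 step=360 loss=0.8343",
]

# (step, loss) pairs for plotting or smoothing; eval lines fall through.
points = [(int(m.group(2)), float(m.group(3)))
          for line in sample if (m := LINE.search(line))]
print(points)  # → [(10, 1.2991), (360, 0.8343)]
```

Anchoring the pattern with `$` is what keeps the eval lines out: their `duration_sec=…` suffix prevents a match, so only plain train-step losses are collected.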