scthornton committed on
Commit
a3f8343
·
verified ·
1 Parent(s): d64e837

Upload folder using huggingface_hub

Browse files
README.md CHANGED
@@ -1,181 +1,62 @@
  ---
- library_name: peft
- license: gemma
  base_model: google/gemma-4-26b-a4b-it
  tags:
- - security
- - secure-code
- - cybersecurity
- - qlora
- - gemma4
- - code-generation
- - owasp
- - ai-security
- datasets:
- - scthornton/securecode
- - scthornton/securecode-web
  pipeline_tag: text-generation
- model-index:
- - name: gemma4-26b-securecode
-   results: []
  ---

- # Gemma 4 26B-A4B SecureCode
-
- **Security-specialized code generation model** fine-tuned on the [SecureCode](https://huggingface.co/datasets/scthornton/securecode) and [SecureCode Web](https://huggingface.co/datasets/scthornton/securecode-web) datasets.
-
- Part of the [SecureCode model collection](https://huggingface.co/collections/scthornton/securecode) by [perfecXion.ai](https://perfecxion.ai).
-
- ## Model Details
-
- | Property | Value |
- |----------|-------|
- | **Base Model** | [google/gemma-4-26b-a4b-it](https://huggingface.co/google/gemma-4-26b-a4b-it) |
- | **Architecture** | Gemma 4 Mixture-of-Experts (26B total, 4B active per token) |
- | **Method** | QLoRA (4-bit NormalFloat quantization) |
- | **Parameters Trained** | ~1-2% via LoRA adapters |
- | **Tier** | Tier 3: Large Security Specialist |
-
- ## Training Configuration
-
- ### QLoRA Settings
-
- | Parameter | Value |
- |-----------|-------|
- | Quantization | 4-bit NormalFloat (NF4) |
- | Compute Dtype | bfloat16 |
- | Double Quantization | Enabled |
- | LoRA Rank | 16 |
- | LoRA Alpha | 32 |
- | LoRA Dropout | 0.05 |
- | Target Modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
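In code, the settings in the table above correspond roughly to the following `bitsandbytes` quantization and `peft` LoRA configuration. This is a sketch, not the actual training script; class and argument names follow current `transformers`/`peft` releases.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization with double quantization and bf16 compute, per the table
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA rank 16, alpha 32, dropout 0.05 on all attention and MLP projections
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```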
-
- ### Training Hyperparameters
-
- | Parameter | Value |
- |-----------|-------|
- | Learning Rate | 2e-4 |
- | LR Scheduler | Cosine with 100-step warmup |
- | Epochs | 3 |
- | Per-device Batch Size | 2 |
- | Gradient Accumulation | 8x |
- | Effective Batch Size | 16 |
- | Max Sequence Length | 4,096 tokens |
- | Optimizer | paged_adamw_8bit |
- | Precision | bf16 |
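These hyperparameters map onto TRL's `SFTConfig` roughly as follows (a sketch; exact argument names vary between TRL releases, e.g. `max_length` here versus the older `max_seq_length`):

```python
from trl import SFTConfig

# Hyperparameters from the table; effective batch = 2 per device x 8 accumulation = 16
sft_config = SFTConfig(
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=3,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    max_length=4096,
    optim="paged_adamw_8bit",
    bf16=True,
)
```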
-
- ### Hardware

- | Component | Specification |
- |-----------|--------------|
- | System | NVIDIA DGX Spark |
- | GPU | NVIDIA GB10 |
- | Memory | 128 GB Unified (CPU/GPU) |

- ## Training Data
-
- Combined and deduplicated from two datasets:
-
- | Dataset | Examples | Focus |
- |---------|----------|-------|
- | [scthornton/securecode](https://huggingface.co/datasets/scthornton/securecode) | 2,185 | Web + AI/ML security (OWASP Top 10 2021 + LLM Top 10 2025) |
- | [scthornton/securecode-web](https://huggingface.co/datasets/scthornton/securecode-web) | 1,378 | Web security with framework-specific patterns |
-
- ### Coverage
-
- **Vulnerability Standards:**
- - OWASP Top 10 2021 (Web/Application Security)
- - OWASP LLM Top 10 2025 (AI/ML Security)
- - 92+ CWEs mapped
-
- **Programming Languages:** Python, JavaScript, Java, Go, PHP, TypeScript, C#, Ruby, Rust, Kotlin, YAML, HCL
-
- **Frameworks:** 49+ including LangChain, OpenAI, Anthropic, HuggingFace, Django, Express.js, Spring Boot, FastAPI, and more
-
- **Training Format:** 4-turn conversational examples:
- 1. Developer asks about implementing a feature
- 2. Assistant provides vulnerable + secure implementations with attack demonstrations
- 3. Developer asks about testing and edge cases
- 4. Assistant delivers defense-in-depth operational guidance
-
- Every example is grounded in real CVEs and published security incidents.
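Structurally, one training record in this 4-turn format is a standard chat-format message list. The sketch below is illustrative only; the wording and exact schema come from the dataset, not from here.

```python
# Illustrative 4-turn record in chat format; content abbreviated, structure per the list above
example = {
    "messages": [
        {"role": "user", "content": "How do I build a login endpoint in Flask?"},
        {"role": "assistant", "content": "Vulnerable version ... secure version ... attack demo ..."},
        {"role": "user", "content": "How should I test this, and which edge cases matter?"},
        {"role": "assistant", "content": "Defense-in-depth: rate limiting, logging, monitoring ..."},
    ]
}

# The turns alternate developer question / assistant answer, twice
assert [m["role"] for m in example["messages"]] == ["user", "assistant", "user", "assistant"]
```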
-
- ## Usage

  ```python
- from peft import PeftModel
- from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
- import torch
-
- # Load with 4-bit quantization (matches training)
- bnb_config = BitsAndBytesConfig(
-     load_in_4bit=True,
-     bnb_4bit_quant_type="nf4",
-     bnb_4bit_compute_dtype=torch.bfloat16,
- )
-
- base_model = AutoModelForCausalLM.from_pretrained(
-     "google/gemma-4-26b-a4b-it",
-     quantization_config=bnb_config,
-     device_map="auto",
- )
- tokenizer = AutoTokenizer.from_pretrained("scthornton/gemma4-26b-securecode")
- model = PeftModel.from_pretrained(base_model, "scthornton/gemma4-26b-securecode")
-
- messages = [
-     {"role": "user", "content": "How do I implement JWT authentication with refresh tokens in Python?"}
- ]
-
- inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
- outputs = model.generate(inputs, max_new_tokens=2048, do_sample=True, temperature=0.7)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ```

- ## What Makes This Different

- Standard code models generate functional but often **insecure** code. SecureCode-trained models:

- - Generate **secure implementations by default** with proper input validation, parameterized queries, and cryptographic best practices
- - Provide **vulnerable AND secure** code side-by-side so developers understand the risk
- - Include **defense-in-depth guidance**: logging, monitoring, SIEM integration, and infrastructure hardening
- - Cover **AI/ML-specific vulnerabilities**: prompt injection defenses, RAG security, model supply chain protection
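As a minimal concrete instance of that vulnerable-plus-secure pairing, here is a sketch using the standard-library `sqlite3` module; this example is illustrative and not drawn from the dataset.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "bob' OR '1'='1"

# Vulnerable: string interpolation lets the input rewrite the WHERE clause (SQL injection)
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Secure: a parameterized query treats the input strictly as data
secure = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

assert vulnerable == [("alice",)]  # the injected predicate matched every row
assert secure == []                # no user is literally named "bob' OR '1'='1"
```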
-
- ## SecureCode Model Collection

- | Model | Parameters | Base |
- |-------|-----------|------|
- | [llama-3.2-3b-securecode](https://huggingface.co/scthornton/llama-3.2-3b-securecode) | 3B | Llama 3.2 3B |
- | [codegemma-7b-securecode](https://huggingface.co/scthornton/codegemma-7b-securecode) | 7B | CodeGemma 7B IT |
- | [deepseek-coder-6.7b-securecode](https://huggingface.co/scthornton/deepseek-coder-6.7b-securecode) | 6.7B | DeepSeek Coder |
- | [qwen-coder-7b-securecode](https://huggingface.co/scthornton/qwen-coder-7b-securecode) | 7B | Qwen Coder 7B |
- | [codellama-13b-securecode](https://huggingface.co/scthornton/codellama-13b-securecode) | 13B | Code Llama 13B |
- | [qwen2.5-coder-14b-securecode](https://huggingface.co/scthornton/qwen2.5-coder-14b-securecode) | 14B | Qwen 2.5 Coder 14B |
- | [starcoder2-15b-securecode](https://huggingface.co/scthornton/starcoder2-15b-securecode) | 15B | StarCoder2 15B |
- | [granite-20b-code-securecode](https://huggingface.co/scthornton/granite-20b-code-securecode) | 20B | Granite 20B Code |
- | **gemma4-26b-securecode** | **26B (4B active)** | **Gemma 4 26B-A4B IT** |

- ## Limitations

- - Training data focuses on defensive security patterns; not designed for offensive security tooling
- - 4-turn conversation format may not generalize to all coding interaction patterns
- - MoE architecture means only 4B parameters are active per token despite 26B total
- - Security guidance reflects best practices as of early 2026; new vulnerabilities may not be covered

- ## License

- - **Model:** Gemma license (inherited from base model)
- - **Dataset:** CC BY-NC-SA 4.0
- - **Adapters:** CC BY-NC-SA 4.0

- ## Citation

  ```bibtex
- @misc{thornton2026securecode,
-   title={SecureCode: A Production-Grade Multi-Turn Dataset for Training Security-Aware Code Generation Models},
-   author={Thornton, Scott},
-   year={2026},
-   publisher={perfecXion.ai},
-   url={https://huggingface.co/datasets/scthornton/securecode},
-   note={arXiv:2512.18542}
  }
- ```
 
  ---
  base_model: google/gemma-4-26b-a4b-it
+ library_name: peft
+ model_name: gemma4-26b-securecode
  tags:
+ - base_model:adapter:google/gemma-4-26b-a4b-it
+ - lora
+ - sft
+ - transformers
+ - trl
+ license: gemma
  pipeline_tag: text-generation
  ---

+ # Model Card for gemma4-26b-securecode
+ This model is a fine-tuned version of [google/gemma-4-26b-a4b-it](https://huggingface.co/google/gemma-4-26b-a4b-it).
+ It has been trained using [TRL](https://github.com/huggingface/trl).

+ ## Quick start

  ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="scthornton/gemma4-26b-securecode", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
  ```

+ ## Training procedure

+ This model was trained with SFT.

+ ### Framework versions

+ - PEFT: 0.18.2.dev0
+ - TRL: 1.0.0
+ - Transformers: 5.5.0
+ - PyTorch: 2.7.1+cu128
+ - Datasets: 4.8.4
+ - Tokenizers: 0.22.2

+ ## Citations

+ Cite TRL as:
+
  ```bibtex
+ @software{vonwerra2020trl,
+   title = {{TRL: Transformers Reinforcement Learning}},
+   author = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallouédec, Quentin},
+   license = {Apache-2.0},
+   url = {https://github.com/huggingface/trl},
+   year = {2020}
  }
+ ```
adapter_config.json CHANGED
@@ -24,217 +24,217 @@
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
- "peft_version": "0.18.2.dev0@e7355a3b2233820f6f30e558ce133ed22673a087",
  "qalora_group_size": 16,
  "r": 16,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
- "model.language_model.layers.4.self_attn.k_proj",
- "model.language_model.layers.17.self_attn.o_proj",
- "model.language_model.layers.3.mlp.up_proj",
  "model.language_model.layers.17.mlp.up_proj",
- "model.language_model.layers.8.mlp.down_proj",
  "model.language_model.layers.27.self_attn.k_proj",
- "model.language_model.layers.28.mlp.down_proj",
- "model.language_model.layers.6.mlp.up_proj",
- "model.language_model.layers.24.self_attn.k_proj",
- "model.language_model.layers.6.self_attn.q_proj",
- "model.language_model.layers.17.self_attn.q_proj",
- "model.language_model.layers.15.self_attn.k_proj",
- "model.language_model.layers.24.mlp.up_proj",
- "model.language_model.layers.19.mlp.gate_proj",
- "model.language_model.layers.16.self_attn.k_proj",
- "model.language_model.layers.26.self_attn.q_proj",
- "model.language_model.layers.21.mlp.up_proj",
- "model.language_model.layers.17.mlp.down_proj",
- "model.language_model.layers.10.self_attn.v_proj",
- "model.language_model.layers.25.mlp.down_proj",
- "model.language_model.layers.11.mlp.up_proj",
- "model.language_model.layers.2.self_attn.o_proj",
- "model.language_model.layers.15.mlp.down_proj",
- "model.language_model.layers.10.self_attn.k_proj",
- "model.language_model.layers.15.self_attn.q_proj",
- "model.language_model.layers.9.self_attn.v_proj",
- "model.language_model.layers.27.self_attn.o_proj",
- "model.language_model.layers.3.self_attn.v_proj",
- "model.language_model.layers.10.self_attn.q_proj",
- "model.language_model.layers.21.mlp.gate_proj",
- "model.language_model.layers.25.self_attn.q_proj",
  "model.language_model.layers.5.self_attn.o_proj",
  "model.language_model.layers.2.mlp.gate_proj",
- "model.language_model.layers.9.mlp.gate_proj",
- "model.language_model.layers.19.self_attn.v_proj",
- "model.language_model.layers.18.self_attn.k_proj",
- "model.language_model.layers.19.mlp.down_proj",
- "model.language_model.layers.23.self_attn.o_proj",
- "model.language_model.layers.27.mlp.gate_proj",
- "model.language_model.layers.0.mlp.up_proj",
- "model.language_model.layers.20.mlp.gate_proj",
- "model.language_model.layers.28.self_attn.o_proj",
- "model.language_model.layers.4.self_attn.o_proj",
- "model.language_model.layers.28.self_attn.v_proj",
- "model.language_model.layers.11.self_attn.q_proj",
  "model.language_model.layers.26.self_attn.o_proj",
- "model.language_model.layers.9.mlp.down_proj",
- "model.language_model.layers.27.self_attn.v_proj",
- "model.language_model.layers.23.mlp.up_proj",
- "model.language_model.layers.2.mlp.up_proj",
- "model.language_model.layers.0.mlp.gate_proj",
- "model.language_model.layers.18.self_attn.o_proj",
- "model.language_model.layers.19.self_attn.k_proj",
- "model.language_model.layers.10.mlp.down_proj",
- "model.language_model.layers.10.mlp.gate_proj",
- "model.language_model.layers.0.self_attn.o_proj",
- "model.language_model.layers.20.mlp.down_proj",
- "model.language_model.layers.10.self_attn.o_proj",
- "model.language_model.layers.15.self_attn.o_proj",
- "model.language_model.layers.18.mlp.down_proj",
- "model.language_model.layers.1.self_attn.v_proj",
- "model.language_model.layers.13.self_attn.q_proj",
- "model.language_model.layers.18.self_attn.q_proj",
- "model.language_model.layers.3.mlp.down_proj",
- "model.language_model.layers.20.self_attn.k_proj",
- "model.language_model.layers.14.self_attn.o_proj",
- "model.language_model.layers.7.mlp.down_proj",
- "model.language_model.layers.25.self_attn.v_proj",
- "model.language_model.layers.29.mlp.gate_proj",
- "model.language_model.layers.2.self_attn.k_proj",
- "model.language_model.layers.5.self_attn.k_proj",
- "model.language_model.layers.9.self_attn.k_proj",
- "model.language_model.layers.1.mlp.gate_proj",
- "model.language_model.layers.8.self_attn.o_proj",
- "model.language_model.layers.22.self_attn.k_proj",
- "model.language_model.layers.3.self_attn.q_proj",
- "model.language_model.layers.23.self_attn.k_proj",
- "model.language_model.layers.3.self_attn.k_proj",
- "model.language_model.layers.19.self_attn.q_proj",
- "model.language_model.layers.18.self_attn.v_proj",
- "model.language_model.layers.10.mlp.up_proj",
- "model.language_model.layers.11.mlp.gate_proj",
- "model.language_model.layers.1.mlp.up_proj",
- "model.language_model.layers.18.mlp.gate_proj",
- "model.language_model.layers.8.mlp.gate_proj",
  "model.language_model.layers.7.mlp.gate_proj",
  "model.language_model.layers.8.mlp.up_proj",
- "model.language_model.layers.5.self_attn.q_proj",
- "model.language_model.layers.14.self_attn.k_proj",
- "model.language_model.layers.22.self_attn.q_proj",
- "model.language_model.layers.4.mlp.down_proj",
- "model.language_model.layers.22.mlp.gate_proj",
- "model.language_model.layers.15.self_attn.v_proj",
- "model.language_model.layers.21.self_attn.o_proj",
- "model.language_model.layers.11.self_attn.o_proj",
- "model.language_model.layers.20.mlp.up_proj",
- "model.language_model.layers.16.self_attn.q_proj",
- "model.language_model.layers.1.self_attn.k_proj",
- "model.language_model.layers.24.mlp.gate_proj",
- "model.language_model.layers.26.mlp.gate_proj",
  "model.language_model.layers.2.self_attn.q_proj",
- "model.language_model.layers.4.mlp.gate_proj",
  "model.language_model.layers.7.self_attn.q_proj",
- "model.language_model.layers.14.self_attn.v_proj",
- "model.language_model.layers.27.self_attn.q_proj",
- "model.language_model.layers.29.mlp.up_proj",
- "model.language_model.layers.28.self_attn.k_proj",
- "model.language_model.layers.24.self_attn.o_proj",
- "model.language_model.layers.26.self_attn.k_proj",
- "model.language_model.layers.21.mlp.down_proj",
- "model.language_model.layers.14.mlp.gate_proj",
- "model.language_model.layers.25.mlp.up_proj",
  "model.language_model.layers.27.mlp.down_proj",
- "model.language_model.layers.20.self_attn.v_proj",
- "model.language_model.layers.0.mlp.down_proj",
- "model.language_model.layers.6.self_attn.v_proj",
- "model.language_model.layers.4.self_attn.q_proj",
- "model.language_model.layers.9.self_attn.q_proj",
- "model.language_model.layers.0.self_attn.q_proj",
- "model.language_model.layers.27.mlp.up_proj",
- "model.language_model.layers.29.self_attn.k_proj",
- "model.language_model.layers.29.self_attn.q_proj",
- "model.language_model.layers.12.mlp.up_proj",
- "model.language_model.layers.6.mlp.down_proj",
  "model.language_model.layers.2.mlp.down_proj",
  "model.language_model.layers.6.mlp.gate_proj",
- "model.language_model.layers.24.self_attn.v_proj",
- "model.language_model.layers.4.mlp.up_proj",
  "model.language_model.layers.9.self_attn.o_proj",
- "model.language_model.layers.22.self_attn.v_proj",
- "model.language_model.layers.23.mlp.gate_proj",
- "model.language_model.layers.5.mlp.down_proj",
- "model.language_model.layers.13.self_attn.o_proj",
- "model.language_model.layers.14.mlp.up_proj",
- "model.language_model.layers.15.mlp.gate_proj",
- "model.language_model.layers.19.self_attn.o_proj",
- "model.language_model.layers.24.mlp.down_proj",
- "model.language_model.layers.21.self_attn.q_proj",
- "model.language_model.layers.15.mlp.up_proj",
- "model.language_model.layers.26.mlp.up_proj",
- "model.language_model.layers.26.mlp.down_proj",
- "model.language_model.layers.25.self_attn.o_proj",
  "model.language_model.layers.8.self_attn.v_proj",
- "model.language_model.layers.12.self_attn.o_proj",
- "model.language_model.layers.6.self_attn.k_proj",
- "model.language_model.layers.17.mlp.gate_proj",
- "model.language_model.layers.12.self_attn.k_proj",
- "model.language_model.layers.13.mlp.down_proj",
- "model.language_model.layers.1.mlp.down_proj",
  "model.language_model.layers.3.mlp.gate_proj",
- "model.language_model.layers.14.mlp.down_proj",
  "model.language_model.layers.9.mlp.up_proj",
- "model.language_model.layers.21.self_attn.k_proj",
- "model.language_model.layers.6.self_attn.o_proj",
- "model.language_model.layers.0.self_attn.v_proj",
- "model.language_model.layers.16.mlp.down_proj",
  "model.language_model.layers.8.self_attn.k_proj",
- "model.language_model.layers.12.mlp.gate_proj",
- "model.language_model.layers.7.self_attn.o_proj",
- "model.language_model.layers.18.mlp.up_proj",
- "model.language_model.layers.13.mlp.up_proj",
- "model.language_model.layers.16.mlp.up_proj",
- "model.language_model.layers.17.self_attn.k_proj",
- "model.language_model.layers.25.self_attn.k_proj",
- "model.language_model.layers.8.self_attn.q_proj",
  "model.language_model.layers.4.self_attn.v_proj",
- "model.language_model.layers.23.self_attn.q_proj",
- "model.language_model.layers.1.self_attn.o_proj",
- "model.language_model.layers.5.mlp.up_proj",
- "model.language_model.layers.13.self_attn.k_proj",
- "model.language_model.layers.7.self_attn.k_proj",
- "model.language_model.layers.22.self_attn.o_proj",
- "model.language_model.layers.22.mlp.up_proj",
- "model.language_model.layers.16.self_attn.o_proj",
- "model.language_model.layers.24.self_attn.q_proj",
- "model.language_model.layers.12.self_attn.q_proj",
  "model.language_model.layers.2.self_attn.v_proj",
- "model.language_model.layers.12.self_attn.v_proj",
  "model.language_model.layers.13.mlp.gate_proj",
- "model.language_model.layers.12.mlp.down_proj",
- "model.language_model.layers.14.self_attn.q_proj",
- "model.language_model.layers.26.self_attn.v_proj",
- "model.language_model.layers.28.mlp.up_proj",
- "model.language_model.layers.19.mlp.up_proj",
- "model.language_model.layers.16.mlp.gate_proj",
- "model.language_model.layers.7.self_attn.v_proj",
- "model.language_model.layers.25.mlp.gate_proj",
- "model.language_model.layers.13.self_attn.v_proj",
  "model.language_model.layers.20.self_attn.q_proj",
- "model.language_model.layers.5.mlp.gate_proj",
- "model.language_model.layers.1.self_attn.q_proj",
- "model.language_model.layers.11.mlp.down_proj",
- "model.language_model.layers.0.self_attn.k_proj",
- "model.language_model.layers.21.self_attn.v_proj",
- "model.language_model.layers.28.self_attn.q_proj",
  "model.language_model.layers.29.self_attn.o_proj",
  "model.language_model.layers.11.self_attn.k_proj",
- "model.language_model.layers.29.mlp.down_proj",
  "model.language_model.layers.7.mlp.up_proj",
  "model.language_model.layers.22.mlp.down_proj",
  "model.language_model.layers.20.self_attn.o_proj",
- "model.language_model.layers.3.self_attn.o_proj",
- "model.language_model.layers.23.mlp.down_proj",
- "model.language_model.layers.16.self_attn.v_proj",
- "model.language_model.layers.28.mlp.gate_proj"
  ],
  "target_parameters": null,
  "task_type": "CAUSAL_LM",
 
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
+ "peft_version": "0.18.2.dev0@7a4b07f2070162972f8c0515bc3acd19f81c0ad7",
  "qalora_group_size": 16,
  "r": 16,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
+ "model.language_model.layers.17.self_attn.q_proj",
+ "model.language_model.layers.7.self_attn.o_proj",
+ "model.language_model.layers.3.mlp.down_proj",
+ "model.language_model.layers.14.mlp.up_proj",
+ "model.language_model.layers.17.self_attn.k_proj",
+ "model.language_model.layers.25.self_attn.o_proj",
+ "model.language_model.layers.6.self_attn.q_proj",
+ "model.language_model.layers.5.self_attn.q_proj",
+ "model.language_model.layers.1.mlp.gate_proj",
  "model.language_model.layers.17.mlp.up_proj",
+ "model.language_model.layers.5.self_attn.k_proj",
+ "model.language_model.layers.16.self_attn.o_proj",
+ "model.language_model.layers.18.mlp.up_proj",
+ "model.language_model.layers.25.self_attn.k_proj",
+ "model.language_model.layers.23.mlp.down_proj",
+ "model.language_model.layers.27.mlp.up_proj",
  "model.language_model.layers.27.self_attn.k_proj",
  "model.language_model.layers.5.self_attn.o_proj",
+ "model.language_model.layers.22.self_attn.k_proj",
+ "model.language_model.layers.1.mlp.down_proj",
  "model.language_model.layers.2.mlp.gate_proj",
  "model.language_model.layers.26.self_attn.o_proj",
  "model.language_model.layers.7.mlp.gate_proj",
+ "model.language_model.layers.24.self_attn.q_proj",
+ "model.language_model.layers.3.self_attn.o_proj",
+ "model.language_model.layers.0.self_attn.q_proj",
+ "model.language_model.layers.21.self_attn.k_proj",
+ "model.language_model.layers.23.self_attn.o_proj",
+ "model.language_model.layers.9.self_attn.q_proj",
+ "model.language_model.layers.5.mlp.gate_proj",
+ "model.language_model.layers.10.self_attn.v_proj",
  "model.language_model.layers.8.mlp.up_proj",
+ "model.language_model.layers.26.self_attn.v_proj",
  "model.language_model.layers.2.self_attn.q_proj",
+ "model.language_model.layers.13.self_attn.o_proj",
+ "model.language_model.layers.7.mlp.down_proj",
+ "model.language_model.layers.24.mlp.down_proj",
+ "model.language_model.layers.6.self_attn.k_proj",
+ "model.language_model.layers.0.self_attn.k_proj",
+ "model.language_model.layers.1.mlp.up_proj",
+ "model.language_model.layers.28.mlp.down_proj",
+ "model.language_model.layers.2.self_attn.k_proj",
+ "model.language_model.layers.22.mlp.up_proj",
  "model.language_model.layers.7.self_attn.q_proj",
+ "model.language_model.layers.22.self_attn.q_proj",
  "model.language_model.layers.27.mlp.down_proj",
  "model.language_model.layers.2.mlp.down_proj",
+ "model.language_model.layers.19.mlp.down_proj",
  "model.language_model.layers.6.mlp.gate_proj",
  "model.language_model.layers.9.self_attn.o_proj",
+ "model.language_model.layers.15.mlp.down_proj",
+ "model.language_model.layers.4.self_attn.o_proj",
+ "model.language_model.layers.29.self_attn.k_proj",
+ "model.language_model.layers.18.self_attn.q_proj",
+ "model.language_model.layers.11.mlp.down_proj",
+ "model.language_model.layers.26.mlp.gate_proj",
+ "model.language_model.layers.23.mlp.up_proj",
+ "model.language_model.layers.0.mlp.down_proj",
  "model.language_model.layers.8.self_attn.v_proj",
+ "model.language_model.layers.14.self_attn.k_proj",
+ "model.language_model.layers.21.mlp.up_proj",
+ "model.language_model.layers.10.self_attn.o_proj",
+ "model.language_model.layers.24.mlp.gate_proj",
+ "model.language_model.layers.28.mlp.up_proj",
+ "model.language_model.layers.29.mlp.down_proj",
  "model.language_model.layers.3.mlp.gate_proj",
+ "model.language_model.layers.8.mlp.down_proj",
+ "model.language_model.layers.9.mlp.down_proj",
+ "model.language_model.layers.18.mlp.down_proj",
+ "model.language_model.layers.19.mlp.gate_proj",
+ "model.language_model.layers.26.mlp.down_proj",
+ "model.language_model.layers.9.self_attn.v_proj",
  "model.language_model.layers.9.mlp.up_proj",
+ "model.language_model.layers.10.self_attn.q_proj",
+ "model.language_model.layers.11.self_attn.q_proj",
+ "model.language_model.layers.18.mlp.gate_proj",
+ "model.language_model.layers.16.self_attn.v_proj",
+ "model.language_model.layers.1.self_attn.k_proj",
+ "model.language_model.layers.25.mlp.up_proj",
+ "model.language_model.layers.28.self_attn.v_proj",
+ "model.language_model.layers.15.mlp.gate_proj",
+ "model.language_model.layers.9.self_attn.k_proj",
+ "model.language_model.layers.27.mlp.gate_proj",
+ "model.language_model.layers.14.self_attn.o_proj",
+ "model.language_model.layers.22.mlp.gate_proj",
+ "model.language_model.layers.14.mlp.down_proj",
  "model.language_model.layers.8.self_attn.k_proj",
+ "model.language_model.layers.12.self_attn.o_proj",
  "model.language_model.layers.4.self_attn.v_proj",
+ "model.language_model.layers.10.mlp.down_proj",
+ "model.language_model.layers.24.mlp.up_proj",
+ "model.language_model.layers.25.mlp.gate_proj",
  "model.language_model.layers.2.self_attn.v_proj",
+ "model.language_model.layers.4.self_attn.k_proj",
+ "model.language_model.layers.8.self_attn.q_proj",
+ "model.language_model.layers.18.self_attn.v_proj",
+ "model.language_model.layers.27.self_attn.o_proj",
+ "model.language_model.layers.16.self_attn.q_proj",
+ "model.language_model.layers.3.mlp.up_proj",
  "model.language_model.layers.13.mlp.gate_proj",
+ "model.language_model.layers.17.mlp.down_proj",
+ "model.language_model.layers.28.self_attn.o_proj",
  "model.language_model.layers.20.self_attn.q_proj",
+ "model.language_model.layers.0.mlp.up_proj",
+ "model.language_model.layers.16.mlp.down_proj",
  "model.language_model.layers.29.self_attn.o_proj",
  "model.language_model.layers.11.self_attn.k_proj",
+ "model.language_model.layers.20.self_attn.v_proj",
+ "model.language_model.layers.14.self_attn.v_proj",
+ "model.language_model.layers.11.mlp.gate_proj",
+ "model.language_model.layers.21.mlp.down_proj",
+ "model.language_model.layers.12.mlp.up_proj",
+ "model.language_model.layers.10.mlp.gate_proj",
+ "model.language_model.layers.10.self_attn.k_proj",
+ "model.language_model.layers.27.self_attn.q_proj",
+ "model.language_model.layers.8.mlp.gate_proj",
+ "model.language_model.layers.19.self_attn.q_proj",
+ "model.language_model.layers.23.self_attn.k_proj",
+ "model.language_model.layers.13.self_attn.q_proj",
+ "model.language_model.layers.0.self_attn.v_proj",
+ "model.language_model.layers.8.self_attn.o_proj",
+ "model.language_model.layers.0.mlp.gate_proj",
+ "model.language_model.layers.17.mlp.gate_proj",
+ "model.language_model.layers.1.self_attn.o_proj",
+ "model.language_model.layers.14.self_attn.q_proj",
+ "model.language_model.layers.14.mlp.gate_proj",
+ "model.language_model.layers.12.mlp.down_proj",
+ "model.language_model.layers.21.self_attn.o_proj",
+ "model.language_model.layers.5.mlp.up_proj",
+ "model.language_model.layers.20.mlp.up_proj",
+ "model.language_model.layers.13.mlp.up_proj",
+ "model.language_model.layers.18.self_attn.k_proj",
+ "model.language_model.layers.23.mlp.gate_proj",
+ "model.language_model.layers.4.mlp.down_proj",
+ "model.language_model.layers.24.self_attn.o_proj",
+ "model.language_model.layers.28.self_attn.k_proj",
+ "model.language_model.layers.13.self_attn.v_proj",
+ "model.language_model.layers.6.mlp.down_proj",
+ "model.language_model.layers.13.mlp.down_proj",
+ "model.language_model.layers.21.self_attn.q_proj",
+ "model.language_model.layers.10.mlp.up_proj",
+ "model.language_model.layers.15.self_attn.v_proj",
+ "model.language_model.layers.0.self_attn.o_proj",
+ "model.language_model.layers.9.mlp.gate_proj",
+ "model.language_model.layers.16.mlp.up_proj",
+ "model.language_model.layers.11.self_attn.o_proj",
+ "model.language_model.layers.17.self_attn.o_proj",
+ "model.language_model.layers.20.mlp.gate_proj",
+ "model.language_model.layers.26.mlp.up_proj",
+ "model.language_model.layers.15.mlp.up_proj",
+ "model.language_model.layers.12.mlp.gate_proj",
+ "model.language_model.layers.22.self_attn.o_proj",
+ "model.language_model.layers.28.mlp.gate_proj",
+ "model.language_model.layers.21.mlp.gate_proj",
+ "model.language_model.layers.2.mlp.up_proj",
+ "model.language_model.layers.28.self_attn.q_proj",
+ "model.language_model.layers.29.self_attn.q_proj",
  "model.language_model.layers.7.mlp.up_proj",
+ "model.language_model.layers.15.self_attn.q_proj",
+ "model.language_model.layers.19.self_attn.k_proj",
+ "model.language_model.layers.7.self_attn.v_proj",
+ "model.language_model.layers.29.mlp.gate_proj",
+ "model.language_model.layers.24.self_attn.k_proj",
+ "model.language_model.layers.16.mlp.gate_proj",
+ "model.language_model.layers.12.self_attn.k_proj",
+ "model.language_model.layers.4.mlp.up_proj",
+ "model.language_model.layers.20.mlp.down_proj",
+ "model.language_model.layers.5.mlp.down_proj",
  "model.language_model.layers.22.mlp.down_proj",
+ "model.language_model.layers.3.self_attn.q_proj",
+ "model.language_model.layers.26.self_attn.k_proj",
  "model.language_model.layers.20.self_attn.o_proj",
+ "model.language_model.layers.24.self_attn.v_proj",
+ "model.language_model.layers.21.self_attn.v_proj",
+ "model.language_model.layers.19.self_attn.o_proj",
+ "model.language_model.layers.29.mlp.up_proj",
+ "model.language_model.layers.13.self_attn.k_proj",
+ "model.language_model.layers.2.self_attn.o_proj",
+ "model.language_model.layers.16.self_attn.k_proj",
+ "model.language_model.layers.22.self_attn.v_proj",
+ "model.language_model.layers.25.self_attn.v_proj",
+ "model.language_model.layers.25.mlp.down_proj",
+ "model.language_model.layers.4.mlp.gate_proj",
+ "model.language_model.layers.6.self_attn.o_proj",
+ "model.language_model.layers.25.self_attn.q_proj",
+ "model.language_model.layers.7.self_attn.k_proj",
+ "model.language_model.layers.11.mlp.up_proj",
+ "model.language_model.layers.20.self_attn.k_proj",
+ "model.language_model.layers.6.mlp.up_proj",
+ "model.language_model.layers.15.self_attn.k_proj",
+ "model.language_model.layers.19.mlp.up_proj",
+ "model.language_model.layers.12.self_attn.q_proj",
+ "model.language_model.layers.4.self_attn.q_proj",
+ "model.language_model.layers.18.self_attn.o_proj",
+ "model.language_model.layers.1.self_attn.v_proj",
+ "model.language_model.layers.15.self_attn.o_proj",
+ "model.language_model.layers.19.self_attn.v_proj",
+ "model.language_model.layers.6.self_attn.v_proj",
+ "model.language_model.layers.12.self_attn.v_proj",
+ "model.language_model.layers.3.self_attn.k_proj",
+ "model.language_model.layers.26.self_attn.q_proj",
+ "model.language_model.layers.1.self_attn.q_proj",
+ "model.language_model.layers.27.self_attn.v_proj",
+ "model.language_model.layers.3.self_attn.v_proj",
+ "model.language_model.layers.23.self_attn.q_proj"
  ],
  "target_parameters": null,
  "task_type": "CAUSAL_LM",
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:bd22f3389135f43d99a3e2a496b4df257c053e7777182b566f577e496b885a88
+ oid sha256:f9d4da55f38e24757e8eb8962365a466a0ffc7d89aaac9ffdc75a3d30f2c4855
  size 74403016
checkpoint-121/adapter_config.json CHANGED
@@ -24,217 +24,217 @@
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
- "peft_version": "0.18.2.dev0@e7355a3b2233820f6f30e558ce133ed22673a087",
  "qalora_group_size": 16,
  "r": 16,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
- "model.language_model.layers.4.self_attn.k_proj",
- "model.language_model.layers.17.self_attn.o_proj",
- "model.language_model.layers.3.mlp.up_proj",
  "model.language_model.layers.17.mlp.up_proj",
- "model.language_model.layers.8.mlp.down_proj",
  "model.language_model.layers.27.self_attn.k_proj",
- "model.language_model.layers.28.mlp.down_proj",
- "model.language_model.layers.6.mlp.up_proj",
- "model.language_model.layers.24.self_attn.k_proj",
- "model.language_model.layers.6.self_attn.q_proj",
- "model.language_model.layers.17.self_attn.q_proj",
- "model.language_model.layers.15.self_attn.k_proj",
- "model.language_model.layers.24.mlp.up_proj",
- "model.language_model.layers.19.mlp.gate_proj",
- "model.language_model.layers.16.self_attn.k_proj",
- "model.language_model.layers.26.self_attn.q_proj",
- "model.language_model.layers.21.mlp.up_proj",
- "model.language_model.layers.17.mlp.down_proj",
- "model.language_model.layers.10.self_attn.v_proj",
- "model.language_model.layers.25.mlp.down_proj",
- "model.language_model.layers.11.mlp.up_proj",
- "model.language_model.layers.2.self_attn.o_proj",
- "model.language_model.layers.15.mlp.down_proj",
- "model.language_model.layers.10.self_attn.k_proj",
- "model.language_model.layers.15.self_attn.q_proj",
- "model.language_model.layers.9.self_attn.v_proj",
- "model.language_model.layers.27.self_attn.o_proj",
- "model.language_model.layers.3.self_attn.v_proj",
- "model.language_model.layers.10.self_attn.q_proj",
- "model.language_model.layers.21.mlp.gate_proj",
- "model.language_model.layers.25.self_attn.q_proj",
  "model.language_model.layers.5.self_attn.o_proj",
  "model.language_model.layers.2.mlp.gate_proj",
- "model.language_model.layers.9.mlp.gate_proj",
- "model.language_model.layers.19.self_attn.v_proj",
- "model.language_model.layers.18.self_attn.k_proj",
- "model.language_model.layers.19.mlp.down_proj",
- "model.language_model.layers.23.self_attn.o_proj",
- "model.language_model.layers.27.mlp.gate_proj",
- "model.language_model.layers.0.mlp.up_proj",
- "model.language_model.layers.20.mlp.gate_proj",
- "model.language_model.layers.28.self_attn.o_proj",
- "model.language_model.layers.4.self_attn.o_proj",
- "model.language_model.layers.28.self_attn.v_proj",
- "model.language_model.layers.11.self_attn.q_proj",
  "model.language_model.layers.26.self_attn.o_proj",
- "model.language_model.layers.9.mlp.down_proj",
- "model.language_model.layers.27.self_attn.v_proj",
- "model.language_model.layers.23.mlp.up_proj",
- "model.language_model.layers.2.mlp.up_proj",
- "model.language_model.layers.0.mlp.gate_proj",
- "model.language_model.layers.18.self_attn.o_proj",
- "model.language_model.layers.19.self_attn.k_proj",
- "model.language_model.layers.10.mlp.down_proj",
- "model.language_model.layers.10.mlp.gate_proj",
- "model.language_model.layers.0.self_attn.o_proj",
- "model.language_model.layers.20.mlp.down_proj",
- "model.language_model.layers.10.self_attn.o_proj",
- "model.language_model.layers.15.self_attn.o_proj",
- "model.language_model.layers.18.mlp.down_proj",
- "model.language_model.layers.1.self_attn.v_proj",
- "model.language_model.layers.13.self_attn.q_proj",
- "model.language_model.layers.18.self_attn.q_proj",
- "model.language_model.layers.3.mlp.down_proj",
- "model.language_model.layers.20.self_attn.k_proj",
- "model.language_model.layers.14.self_attn.o_proj",
- "model.language_model.layers.7.mlp.down_proj",
- "model.language_model.layers.25.self_attn.v_proj",
- "model.language_model.layers.29.mlp.gate_proj",
- "model.language_model.layers.2.self_attn.k_proj",
- "model.language_model.layers.5.self_attn.k_proj",
- "model.language_model.layers.9.self_attn.k_proj",
- "model.language_model.layers.1.mlp.gate_proj",
- "model.language_model.layers.8.self_attn.o_proj",
- "model.language_model.layers.22.self_attn.k_proj",
- "model.language_model.layers.3.self_attn.q_proj",
- "model.language_model.layers.23.self_attn.k_proj",
- "model.language_model.layers.3.self_attn.k_proj",
- "model.language_model.layers.19.self_attn.q_proj",
- "model.language_model.layers.18.self_attn.v_proj",
- "model.language_model.layers.10.mlp.up_proj",
- "model.language_model.layers.11.mlp.gate_proj",
- "model.language_model.layers.1.mlp.up_proj",
- "model.language_model.layers.18.mlp.gate_proj",
- "model.language_model.layers.8.mlp.gate_proj",
  "model.language_model.layers.7.mlp.gate_proj",
  "model.language_model.layers.8.mlp.up_proj",
- "model.language_model.layers.5.self_attn.q_proj",
- "model.language_model.layers.14.self_attn.k_proj",
- "model.language_model.layers.22.self_attn.q_proj",
- "model.language_model.layers.4.mlp.down_proj",
- "model.language_model.layers.22.mlp.gate_proj",
- "model.language_model.layers.15.self_attn.v_proj",
- "model.language_model.layers.21.self_attn.o_proj",
- "model.language_model.layers.11.self_attn.o_proj",
- "model.language_model.layers.20.mlp.up_proj",
- "model.language_model.layers.16.self_attn.q_proj",
- "model.language_model.layers.1.self_attn.k_proj",
- "model.language_model.layers.24.mlp.gate_proj",
- "model.language_model.layers.26.mlp.gate_proj",
  "model.language_model.layers.2.self_attn.q_proj",
- "model.language_model.layers.4.mlp.gate_proj",
  "model.language_model.layers.7.self_attn.q_proj",
- "model.language_model.layers.14.self_attn.v_proj",
- "model.language_model.layers.27.self_attn.q_proj",
- "model.language_model.layers.29.mlp.up_proj",
- "model.language_model.layers.28.self_attn.k_proj",
- "model.language_model.layers.24.self_attn.o_proj",
- "model.language_model.layers.26.self_attn.k_proj",
- "model.language_model.layers.21.mlp.down_proj",
- "model.language_model.layers.14.mlp.gate_proj",
- "model.language_model.layers.25.mlp.up_proj",
  "model.language_model.layers.27.mlp.down_proj",
- "model.language_model.layers.20.self_attn.v_proj",
- "model.language_model.layers.0.mlp.down_proj",
- "model.language_model.layers.6.self_attn.v_proj",
- "model.language_model.layers.4.self_attn.q_proj",
- "model.language_model.layers.9.self_attn.q_proj",
- "model.language_model.layers.0.self_attn.q_proj",
- "model.language_model.layers.27.mlp.up_proj",
- "model.language_model.layers.29.self_attn.k_proj",
- "model.language_model.layers.29.self_attn.q_proj",
- "model.language_model.layers.12.mlp.up_proj",
- "model.language_model.layers.6.mlp.down_proj",
  "model.language_model.layers.2.mlp.down_proj",
  "model.language_model.layers.6.mlp.gate_proj",
- "model.language_model.layers.24.self_attn.v_proj",
- "model.language_model.layers.4.mlp.up_proj",
  "model.language_model.layers.9.self_attn.o_proj",
- "model.language_model.layers.22.self_attn.v_proj",
- "model.language_model.layers.23.mlp.gate_proj",
- "model.language_model.layers.5.mlp.down_proj",
- "model.language_model.layers.13.self_attn.o_proj",
- "model.language_model.layers.14.mlp.up_proj",
- "model.language_model.layers.15.mlp.gate_proj",
- "model.language_model.layers.19.self_attn.o_proj",
- "model.language_model.layers.24.mlp.down_proj",
- "model.language_model.layers.21.self_attn.q_proj",
- "model.language_model.layers.15.mlp.up_proj",
- "model.language_model.layers.26.mlp.up_proj",
- "model.language_model.layers.26.mlp.down_proj",
- "model.language_model.layers.25.self_attn.o_proj",
  "model.language_model.layers.8.self_attn.v_proj",
- "model.language_model.layers.12.self_attn.o_proj",
- "model.language_model.layers.6.self_attn.k_proj",
- "model.language_model.layers.17.mlp.gate_proj",
- "model.language_model.layers.12.self_attn.k_proj",
- "model.language_model.layers.13.mlp.down_proj",
- "model.language_model.layers.1.mlp.down_proj",
  "model.language_model.layers.3.mlp.gate_proj",
- "model.language_model.layers.14.mlp.down_proj",
  "model.language_model.layers.9.mlp.up_proj",
- "model.language_model.layers.21.self_attn.k_proj",
- "model.language_model.layers.6.self_attn.o_proj",
- "model.language_model.layers.0.self_attn.v_proj",
- "model.language_model.layers.16.mlp.down_proj",
  "model.language_model.layers.8.self_attn.k_proj",
- "model.language_model.layers.12.mlp.gate_proj",
- "model.language_model.layers.7.self_attn.o_proj",
- "model.language_model.layers.18.mlp.up_proj",
- "model.language_model.layers.13.mlp.up_proj",
- "model.language_model.layers.16.mlp.up_proj",
- "model.language_model.layers.17.self_attn.k_proj",
- "model.language_model.layers.25.self_attn.k_proj",
- "model.language_model.layers.8.self_attn.q_proj",
  "model.language_model.layers.4.self_attn.v_proj",
- "model.language_model.layers.23.self_attn.q_proj",
- "model.language_model.layers.1.self_attn.o_proj",
- "model.language_model.layers.5.mlp.up_proj",
- "model.language_model.layers.13.self_attn.k_proj",
- "model.language_model.layers.7.self_attn.k_proj",
- "model.language_model.layers.22.self_attn.o_proj",
- "model.language_model.layers.22.mlp.up_proj",
- "model.language_model.layers.16.self_attn.o_proj",
- "model.language_model.layers.24.self_attn.q_proj",
- "model.language_model.layers.12.self_attn.q_proj",
  "model.language_model.layers.2.self_attn.v_proj",
- "model.language_model.layers.12.self_attn.v_proj",
  "model.language_model.layers.13.mlp.gate_proj",
- "model.language_model.layers.12.mlp.down_proj",
- "model.language_model.layers.14.self_attn.q_proj",
- "model.language_model.layers.26.self_attn.v_proj",
- "model.language_model.layers.28.mlp.up_proj",
- "model.language_model.layers.19.mlp.up_proj",
- "model.language_model.layers.16.mlp.gate_proj",
- "model.language_model.layers.7.self_attn.v_proj",
- "model.language_model.layers.25.mlp.gate_proj",
- "model.language_model.layers.13.self_attn.v_proj",
  "model.language_model.layers.20.self_attn.q_proj",
- "model.language_model.layers.5.mlp.gate_proj",
- "model.language_model.layers.1.self_attn.q_proj",
- "model.language_model.layers.11.mlp.down_proj",
- "model.language_model.layers.0.self_attn.k_proj",
- "model.language_model.layers.21.self_attn.v_proj",
- "model.language_model.layers.28.self_attn.q_proj",
  "model.language_model.layers.29.self_attn.o_proj",
  "model.language_model.layers.11.self_attn.k_proj",
- "model.language_model.layers.29.mlp.down_proj",
  "model.language_model.layers.7.mlp.up_proj",
  "model.language_model.layers.22.mlp.down_proj",
  "model.language_model.layers.20.self_attn.o_proj",
- "model.language_model.layers.3.self_attn.o_proj",
- "model.language_model.layers.23.mlp.down_proj",
- "model.language_model.layers.16.self_attn.v_proj",
- "model.language_model.layers.28.mlp.gate_proj"
  ],
  "target_parameters": null,
  "task_type": "CAUSAL_LM",
 
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
+ "peft_version": "0.18.2.dev0@7a4b07f2070162972f8c0515bc3acd19f81c0ad7",
  "qalora_group_size": 16,
  "r": 16,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
+ "model.language_model.layers.17.self_attn.q_proj",
+ "model.language_model.layers.7.self_attn.o_proj",
+ "model.language_model.layers.3.mlp.down_proj",
+ "model.language_model.layers.14.mlp.up_proj",
+ "model.language_model.layers.17.self_attn.k_proj",
+ "model.language_model.layers.25.self_attn.o_proj",
+ "model.language_model.layers.6.self_attn.q_proj",
+ "model.language_model.layers.5.self_attn.q_proj",
+ "model.language_model.layers.1.mlp.gate_proj",
  "model.language_model.layers.17.mlp.up_proj",
+ "model.language_model.layers.5.self_attn.k_proj",
+ "model.language_model.layers.16.self_attn.o_proj",
+ "model.language_model.layers.18.mlp.up_proj",
+ "model.language_model.layers.25.self_attn.k_proj",
+ "model.language_model.layers.23.mlp.down_proj",
+ "model.language_model.layers.27.mlp.up_proj",
  "model.language_model.layers.27.self_attn.k_proj",
  "model.language_model.layers.5.self_attn.o_proj",
+ "model.language_model.layers.22.self_attn.k_proj",
+ "model.language_model.layers.1.mlp.down_proj",
  "model.language_model.layers.2.mlp.gate_proj",
  "model.language_model.layers.26.self_attn.o_proj",
  "model.language_model.layers.7.mlp.gate_proj",
+ "model.language_model.layers.24.self_attn.q_proj",
+ "model.language_model.layers.3.self_attn.o_proj",
+ "model.language_model.layers.0.self_attn.q_proj",
+ "model.language_model.layers.21.self_attn.k_proj",
+ "model.language_model.layers.23.self_attn.o_proj",
+ "model.language_model.layers.9.self_attn.q_proj",
+ "model.language_model.layers.5.mlp.gate_proj",
+ "model.language_model.layers.10.self_attn.v_proj",
  "model.language_model.layers.8.mlp.up_proj",
+ "model.language_model.layers.26.self_attn.v_proj",
  "model.language_model.layers.2.self_attn.q_proj",
+ "model.language_model.layers.13.self_attn.o_proj",
+ "model.language_model.layers.7.mlp.down_proj",
+ "model.language_model.layers.24.mlp.down_proj",
+ "model.language_model.layers.6.self_attn.k_proj",
+ "model.language_model.layers.0.self_attn.k_proj",
+ "model.language_model.layers.1.mlp.up_proj",
+ "model.language_model.layers.28.mlp.down_proj",
+ "model.language_model.layers.2.self_attn.k_proj",
+ "model.language_model.layers.22.mlp.up_proj",
  "model.language_model.layers.7.self_attn.q_proj",
+ "model.language_model.layers.22.self_attn.q_proj",
  "model.language_model.layers.27.mlp.down_proj",
  "model.language_model.layers.2.mlp.down_proj",
+ "model.language_model.layers.19.mlp.down_proj",
  "model.language_model.layers.6.mlp.gate_proj",
  "model.language_model.layers.9.self_attn.o_proj",
+ "model.language_model.layers.15.mlp.down_proj",
+ "model.language_model.layers.4.self_attn.o_proj",
+ "model.language_model.layers.29.self_attn.k_proj",
+ "model.language_model.layers.18.self_attn.q_proj",
+ "model.language_model.layers.11.mlp.down_proj",
+ "model.language_model.layers.26.mlp.gate_proj",
+ "model.language_model.layers.23.mlp.up_proj",
+ "model.language_model.layers.0.mlp.down_proj",
  "model.language_model.layers.8.self_attn.v_proj",
+ "model.language_model.layers.14.self_attn.k_proj",
+ "model.language_model.layers.21.mlp.up_proj",
+ "model.language_model.layers.10.self_attn.o_proj",
+ "model.language_model.layers.24.mlp.gate_proj",
+ "model.language_model.layers.28.mlp.up_proj",
+ "model.language_model.layers.29.mlp.down_proj",
  "model.language_model.layers.3.mlp.gate_proj",
+ "model.language_model.layers.8.mlp.down_proj",
+ "model.language_model.layers.9.mlp.down_proj",
+ "model.language_model.layers.18.mlp.down_proj",
+ "model.language_model.layers.19.mlp.gate_proj",
+ "model.language_model.layers.26.mlp.down_proj",
+ "model.language_model.layers.9.self_attn.v_proj",
  "model.language_model.layers.9.mlp.up_proj",
+ "model.language_model.layers.10.self_attn.q_proj",
+ "model.language_model.layers.11.self_attn.q_proj",
+ "model.language_model.layers.18.mlp.gate_proj",
+ "model.language_model.layers.16.self_attn.v_proj",
+ "model.language_model.layers.1.self_attn.k_proj",
+ "model.language_model.layers.25.mlp.up_proj",
+ "model.language_model.layers.28.self_attn.v_proj",
+ "model.language_model.layers.15.mlp.gate_proj",
+ "model.language_model.layers.9.self_attn.k_proj",
+ "model.language_model.layers.27.mlp.gate_proj",
+ "model.language_model.layers.14.self_attn.o_proj",
+ "model.language_model.layers.22.mlp.gate_proj",
+ "model.language_model.layers.14.mlp.down_proj",
  "model.language_model.layers.8.self_attn.k_proj",
+ "model.language_model.layers.12.self_attn.o_proj",
  "model.language_model.layers.4.self_attn.v_proj",
+ "model.language_model.layers.10.mlp.down_proj",
+ "model.language_model.layers.24.mlp.up_proj",
+ "model.language_model.layers.25.mlp.gate_proj",
  "model.language_model.layers.2.self_attn.v_proj",
+ "model.language_model.layers.4.self_attn.k_proj",
+ "model.language_model.layers.8.self_attn.q_proj",
+ "model.language_model.layers.18.self_attn.v_proj",
+ "model.language_model.layers.27.self_attn.o_proj",
+ "model.language_model.layers.16.self_attn.q_proj",
+ "model.language_model.layers.3.mlp.up_proj",
  "model.language_model.layers.13.mlp.gate_proj",
+ "model.language_model.layers.17.mlp.down_proj",
+ "model.language_model.layers.28.self_attn.o_proj",
  "model.language_model.layers.20.self_attn.q_proj",
+ "model.language_model.layers.0.mlp.up_proj",
+ "model.language_model.layers.16.mlp.down_proj",
  "model.language_model.layers.29.self_attn.o_proj",
  "model.language_model.layers.11.self_attn.k_proj",
+ "model.language_model.layers.20.self_attn.v_proj",
+ "model.language_model.layers.14.self_attn.v_proj",
+ "model.language_model.layers.11.mlp.gate_proj",
+ "model.language_model.layers.21.mlp.down_proj",
+ "model.language_model.layers.12.mlp.up_proj",
+ "model.language_model.layers.10.mlp.gate_proj",
+ "model.language_model.layers.10.self_attn.k_proj",
+ "model.language_model.layers.27.self_attn.q_proj",
+ "model.language_model.layers.8.mlp.gate_proj",
+ "model.language_model.layers.19.self_attn.q_proj",
+ "model.language_model.layers.23.self_attn.k_proj",
+ "model.language_model.layers.13.self_attn.q_proj",
+ "model.language_model.layers.0.self_attn.v_proj",
+ "model.language_model.layers.8.self_attn.o_proj",
+ "model.language_model.layers.0.mlp.gate_proj",
+ "model.language_model.layers.17.mlp.gate_proj",
+ "model.language_model.layers.1.self_attn.o_proj",
+ "model.language_model.layers.14.self_attn.q_proj",
+ "model.language_model.layers.14.mlp.gate_proj",
+ "model.language_model.layers.12.mlp.down_proj",
+ "model.language_model.layers.21.self_attn.o_proj",
+ "model.language_model.layers.5.mlp.up_proj",
+ "model.language_model.layers.20.mlp.up_proj",
+ "model.language_model.layers.13.mlp.up_proj",
+ "model.language_model.layers.18.self_attn.k_proj",
+ "model.language_model.layers.23.mlp.gate_proj",
+ "model.language_model.layers.4.mlp.down_proj",
+ "model.language_model.layers.24.self_attn.o_proj",
+ "model.language_model.layers.28.self_attn.k_proj",
+ "model.language_model.layers.13.self_attn.v_proj",
+ "model.language_model.layers.6.mlp.down_proj",
+ "model.language_model.layers.13.mlp.down_proj",
+ "model.language_model.layers.21.self_attn.q_proj",
+ "model.language_model.layers.10.mlp.up_proj",
+ "model.language_model.layers.15.self_attn.v_proj",
+ "model.language_model.layers.0.self_attn.o_proj",
+ "model.language_model.layers.9.mlp.gate_proj",
+ "model.language_model.layers.16.mlp.up_proj",
+ "model.language_model.layers.11.self_attn.o_proj",
+ "model.language_model.layers.17.self_attn.o_proj",
+ "model.language_model.layers.20.mlp.gate_proj",
+ "model.language_model.layers.26.mlp.up_proj",
+ "model.language_model.layers.15.mlp.up_proj",
+ "model.language_model.layers.12.mlp.gate_proj",
+ "model.language_model.layers.22.self_attn.o_proj",
+ "model.language_model.layers.28.mlp.gate_proj",
+ "model.language_model.layers.21.mlp.gate_proj",
+ "model.language_model.layers.2.mlp.up_proj",
+ "model.language_model.layers.28.self_attn.q_proj",
+ "model.language_model.layers.29.self_attn.q_proj",
  "model.language_model.layers.7.mlp.up_proj",
+ "model.language_model.layers.15.self_attn.q_proj",
+ "model.language_model.layers.19.self_attn.k_proj",
+ "model.language_model.layers.7.self_attn.v_proj",
+ "model.language_model.layers.29.mlp.gate_proj",
+ "model.language_model.layers.24.self_attn.k_proj",
+ "model.language_model.layers.16.mlp.gate_proj",
+ "model.language_model.layers.12.self_attn.k_proj",
+ "model.language_model.layers.4.mlp.up_proj",
+ "model.language_model.layers.20.mlp.down_proj",
+ "model.language_model.layers.5.mlp.down_proj",
  "model.language_model.layers.22.mlp.down_proj",
+ "model.language_model.layers.3.self_attn.q_proj",
+ "model.language_model.layers.26.self_attn.k_proj",
  "model.language_model.layers.20.self_attn.o_proj",
+ "model.language_model.layers.24.self_attn.v_proj",
+ "model.language_model.layers.21.self_attn.v_proj",
+ "model.language_model.layers.19.self_attn.o_proj",
+ "model.language_model.layers.29.mlp.up_proj",
+ "model.language_model.layers.13.self_attn.k_proj",
+ "model.language_model.layers.2.self_attn.o_proj",
+ "model.language_model.layers.16.self_attn.k_proj",
+ "model.language_model.layers.22.self_attn.v_proj",
+ "model.language_model.layers.25.self_attn.v_proj",
+ "model.language_model.layers.25.mlp.down_proj",
+ "model.language_model.layers.4.mlp.gate_proj",
+ "model.language_model.layers.6.self_attn.o_proj",
+ "model.language_model.layers.25.self_attn.q_proj",
+ "model.language_model.layers.7.self_attn.k_proj",
+ "model.language_model.layers.11.mlp.up_proj",
+ "model.language_model.layers.20.self_attn.k_proj",
+ "model.language_model.layers.6.mlp.up_proj",
+ "model.language_model.layers.15.self_attn.k_proj",
+ "model.language_model.layers.19.mlp.up_proj",
+ "model.language_model.layers.12.self_attn.q_proj",
+ "model.language_model.layers.4.self_attn.q_proj",
+ "model.language_model.layers.18.self_attn.o_proj",
+ "model.language_model.layers.1.self_attn.v_proj",
+ "model.language_model.layers.15.self_attn.o_proj",
+ "model.language_model.layers.19.self_attn.v_proj",
+ "model.language_model.layers.6.self_attn.v_proj",
+ "model.language_model.layers.12.self_attn.v_proj",
+ "model.language_model.layers.3.self_attn.k_proj",
+ "model.language_model.layers.26.self_attn.q_proj",
+ "model.language_model.layers.1.self_attn.q_proj",
+ "model.language_model.layers.27.self_attn.v_proj",
+ "model.language_model.layers.3.self_attn.v_proj",
+ "model.language_model.layers.23.self_attn.q_proj"
  ],
  "target_parameters": null,
  "task_type": "CAUSAL_LM",
checkpoint-121/adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:75c05a4dbd6f957da19a4fcd5fa768f553c1cf5fa86a54f286530b8f94bd4e89
+ oid sha256:b1ae622f35ab2e6792a139da50e98d7ef28f5e8f09e820dbf009b9f3e8b94a0c
  size 37232104
checkpoint-121/optimizer.pt CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:93915687bf2358b3909212d14e3049eb99e10df082b649515b609996d0f43a0d
- size 38229709
+ oid sha256:5666cabc3b820b0ff2e713d921b0145061ad461872ec913541f9397f13205211
+ size 38237839
checkpoint-121/rng_state.pth CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:959e8e9e42ca24ad2d2375468311a443a85029f21f69e8aeecbbf05f12d75103
+ oid sha256:b6bdf76518a45d84478f951ab8beaeffe8beb547d3893d1ae00c3e09ecf21c8b
  size 14645
checkpoint-121/scheduler.pt CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6be25e5f07aa9f30e24f822ea2c4935aeae0eb0c636c45fdbb3d908d7c804c2b
+ oid sha256:65cc4565c40d1efa1f6d66b589f484e98593cfb0fbda91711275a8867117b453
  size 1465
checkpoint-121/tokenizer.json CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a2619fe11b50dbed06ac443c51d757b354d0b62d64baa514404d4e84e6713519
- size 32169780
+ oid sha256:cc8d3a0ce36466ccc1278bf987df5f71db1719b9ca6b4118264f45cb627bfe0f
+ size 32169626
checkpoint-121/tokenizer_config.json CHANGED
@@ -41,7 +41,7 @@
  "think_token": "<|think|>"
  },
  "pad_token": "<pad>",
- "padding_side": "left",
+ "padding_side": "right",
  "processor_class": "Gemma4Processor",
  "response_schema": {
  "properties": {
checkpoint-121/trainer_state.json CHANGED
@@ -1,7 +1,7 @@
1
  {
2
  "best_global_step": 121,
3
- "best_metric": 0.6695265769958496,
4
- "best_model_checkpoint": "/home/plucky/ml-workspace/models/gemma4-26b-securecode/checkpoint-121",
5
  "epoch": 1.0,
6
  "eval_steps": 500,
7
  "global_step": 121,
@@ -10,134 +10,134 @@
10
  "is_world_process_zero": true,
11
  "log_history": [
12
  {
13
- "entropy": 1.1113492242991925,
14
  "epoch": 0.0827300930713547,
15
- "grad_norm": 10.3125,
16
  "learning_rate": 1.8e-05,
17
- "loss": 93.48836059570313,
18
- "mean_token_accuracy": 0.4107020549476147,
19
- "num_tokens": 81920.0,
20
  "step": 10
21
  },
22
  {
23
- "entropy": 0.8875315530225635,
24
  "epoch": 0.1654601861427094,
25
- "grad_norm": 6.15625,
26
  "learning_rate": 3.8e-05,
27
- "loss": 67.76697998046875,
28
- "mean_token_accuracy": 0.5182974558323622,
29
- "num_tokens": 163840.0,
30
  "step": 20
31
  },
32
  {
33
- "entropy": 0.673606987670064,
34
  "epoch": 0.2481902792140641,
35
- "grad_norm": 2.421875,
36
  "learning_rate": 5.8e-05,
37
- "loss": 37.221334838867186,
38
- "mean_token_accuracy": 0.6476027386263012,
39
- "num_tokens": 245760.0,
40
  "step": 30
41
  },
42
  {
43
- "entropy": 1.0845661748200655,
44
  "epoch": 0.3309203722854188,
45
- "grad_norm": 1.3671875,
46
  "learning_rate": 7.800000000000001e-05,
47
- "loss": 22.017848205566406,
48
- "mean_token_accuracy": 0.7083170266821981,
49
- "num_tokens": 327680.0,
50
  "step": 40
51
  },
52
  {
53
- "entropy": 1.1636322166770696,
54
  "epoch": 0.4136504653567735,
55
- "grad_norm": 0.703125,
56
  "learning_rate": 9.8e-05,
57
- "loss": 17.47879638671875,
58
- "mean_token_accuracy": 0.7332558700814843,
59
- "num_tokens": 409600.0,
60
  "step": 50
61
  },
62
  {
63
- "entropy": 0.9551631901413202,
64
  "epoch": 0.4963805584281282,
65
- "grad_norm": 0.40625,
66
  "learning_rate": 0.000118,
67
- "loss": 15.09481201171875,
68
- "mean_token_accuracy": 0.7555772982537746,
69
- "num_tokens": 491520.0,
70
  "step": 60
71
  },
72
  {
73
- "entropy": 0.8048430571332574,
74
  "epoch": 0.5791106514994829,
75
- "grad_norm": 0.375,
76
  "learning_rate": 0.000138,
77
- "loss": 13.297686767578124,
78
- "mean_token_accuracy": 0.7774828754365444,
- "num_tokens": 573440.0,
  "step": 70
  },
  {
- "entropy": 0.8100443260744215,
  "epoch": 0.6618407445708376,
- "grad_norm": 0.4609375,
  "learning_rate": 0.00015800000000000002,
- "loss": 12.752572631835937,
- "mean_token_accuracy": 0.7837084107100963,
- "num_tokens": 655360.0,
  "step": 80
  },
  {
- "entropy": 0.7172152267768979,
  "epoch": 0.7445708376421923,
- "grad_norm": 2.1875,
  "learning_rate": 0.00017800000000000002,
- "loss": 11.629959106445312,
- "mean_token_accuracy": 0.799449609220028,
- "num_tokens": 737280.0,
  "step": 90
  },
  {
- "entropy": 0.7284062243998051,
  "epoch": 0.827300930713547,
- "grad_norm": 0.40625,
  "learning_rate": 0.00019800000000000002,
- "loss": 11.506278991699219,
- "mean_token_accuracy": 0.8022871781140566,
- "num_tokens": 819200.0,
  "step": 100
  },
  {
- "entropy": 0.6922262106090784,
  "epoch": 0.9100310237849017,
- "grad_norm": 0.341796875,
  "learning_rate": 0.00019942266891397815,
- "loss": 11.149666595458985,
- "mean_token_accuracy": 0.8068982377648354,
- "num_tokens": 901120.0,
  "step": 110
  },
  {
- "entropy": 0.6608987387269736,
  "epoch": 0.9927611168562565,
- "grad_norm": 0.373046875,
  "learning_rate": 0.00019743551343638324,
- "loss": 10.666960906982421,
- "mean_token_accuracy": 0.8124388422816992,
- "num_tokens": 983040.0,
  "step": 120
  },
  {
  "epoch": 1.0,
- "eval_entropy": 0.6862195637336997,
- "eval_loss": 0.6695265769958496,
- "eval_mean_token_accuracy": 0.8135074851124786,
- "eval_num_tokens": 990208.0,
- "eval_runtime": 255.0413,
- "eval_samples_per_second": 0.843,
- "eval_steps_per_second": 0.843,
  "step": 121
  }
  ],
@@ -158,7 +158,7 @@
  "attributes": {}
  }
  },
- "total_flos": 1.4904406021973606e+17,
  "train_batch_size": 1,
  "trial_name": null,
  "trial_params": null
 
  {
  "best_global_step": 121,
+ "best_metric": 0.4985087513923645,
+ "best_model_checkpoint": "/workspace/gemma4-26b-securecode/checkpoint-121",
  "epoch": 1.0,
  "eval_steps": 500,
  "global_step": 121,

  "is_world_process_zero": true,
  "log_history": [
  {
+ "entropy": 1.0907821020111441,
  "epoch": 0.0827300930713547,
+ "grad_norm": 20.875,
  "learning_rate": 1.8e-05,
+ "loss": 80.26775512695312,
+ "mean_token_accuracy": 0.4542873948812485,
+ "num_tokens": 326185.0,
  "step": 10
  },
  {
+ "entropy": 0.8271314173936843,
  "epoch": 0.1654601861427094,
+ "grad_norm": 8.75,
  "learning_rate": 3.8e-05,
+ "loss": 58.08096923828125,
+ "mean_token_accuracy": 0.5611657274886965,
+ "num_tokens": 653865.0,
  "step": 20
  },
  {
+ "entropy": 0.4787554959766567,
  "epoch": 0.2481902792140641,
+ "grad_norm": 1.7109375,
  "learning_rate": 5.8e-05,
+ "loss": 25.493240356445312,
+ "mean_token_accuracy": 0.7378443486988544,
+ "num_tokens": 981337.0,
  "step": 30
  },
  {
+ "entropy": 0.7855595085769892,
  "epoch": 0.3309203722854188,
+ "grad_norm": 0.8671875,
  "learning_rate": 7.800000000000001e-05,
+ "loss": 14.629072570800782,
+ "mean_token_accuracy": 0.7917733617126942,
+ "num_tokens": 1308584.0,
  "step": 40
  },
  {
+ "entropy": 0.7569877350702882,
  "epoch": 0.4136504653567735,
+ "grad_norm": 2.109375,
  "learning_rate": 9.8e-05,
+ "loss": 12.609142303466797,
+ "mean_token_accuracy": 0.8013272784650326,
+ "num_tokens": 1635098.0,
  "step": 50
  },
  {
+ "entropy": 0.6735223602503538,
  "epoch": 0.4963805584281282,
+ "grad_norm": 16.875,
  "learning_rate": 0.000118,
+ "loss": 10.704925537109375,
+ "mean_token_accuracy": 0.8209844313561916,
+ "num_tokens": 1962302.0,
  "step": 60
  },
  {
+ "entropy": 0.6005677949637175,
  "epoch": 0.5791106514994829,
+ "grad_norm": 1.546875,
  "learning_rate": 0.000138,
+ "loss": 9.783185577392578,
+ "mean_token_accuracy": 0.8308866504579783,
+ "num_tokens": 2289982.0,
  "step": 70
  },
  {
+ "entropy": 0.5877057909965515,
  "epoch": 0.6618407445708376,
+ "grad_norm": 11.25,
  "learning_rate": 0.00015800000000000002,
+ "loss": 9.298844909667968,
+ "mean_token_accuracy": 0.8359990835189819,
+ "num_tokens": 2616786.0,
  "step": 80
  },
  {
+ "entropy": 0.5447238819673658,
  "epoch": 0.7445708376421923,
+ "grad_norm": 1.2890625,
  "learning_rate": 0.00017800000000000002,
+ "loss": 8.777264404296876,
+ "mean_token_accuracy": 0.8440194871276617,
+ "num_tokens": 2941975.0,
  "step": 90
  },
  {
+ "entropy": 0.5323287105187774,
  "epoch": 0.827300930713547,
+ "grad_norm": 0.70703125,
  "learning_rate": 0.00019800000000000002,
+ "loss": 8.489185333251953,
+ "mean_token_accuracy": 0.8486687760800123,
+ "num_tokens": 3269655.0,
  "step": 100
  },
  {
+ "entropy": 0.4949887519702315,
  "epoch": 0.9100310237849017,
+ "grad_norm": 0.439453125,
  "learning_rate": 0.00019942266891397815,
+ "loss": 8.192723083496094,
+ "mean_token_accuracy": 0.8528529018163681,
+ "num_tokens": 3595193.0,
  "step": 110
  },
  {
+ "entropy": 0.4980895221233368,
  "epoch": 0.9927611168562565,
+ "grad_norm": 0.921875,
  "learning_rate": 0.00019743551343638324,
+ "loss": 7.908926391601563,
+ "mean_token_accuracy": 0.8567473825067282,
+ "num_tokens": 3922475.0,
  "step": 120
  },
  {
  "epoch": 1.0,
+ "eval_entropy": 0.5629051625728607,
+ "eval_loss": 0.4985087513923645,
+ "eval_mean_token_accuracy": 0.8571729124978531,
+ "eval_num_tokens": 3949618.0,
+ "eval_runtime": 122.3216,
+ "eval_samples_per_second": 1.758,
+ "eval_steps_per_second": 1.758,
  "step": 121
  }
  ],

  "attributes": {}
  }
  },
+ "total_flos": 5.944883327916494e+17,
  "train_batch_size": 1,
  "trial_name": null,
  "trial_params": null
checkpoint-121/training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2eaf4e1eba101412810b250e27914b2df87f93b0a9c62028451f50813e692b8e
  size 5713

  version https://git-lfs.github.com/spec/v1
+ oid sha256:5bdf412780aad6b6bc055248dd1640e0d1a2282e1c11f28390eac7fae5fae303
  size 5713
checkpoint-242/adapter_config.json CHANGED
@@ -24,217 +24,217 @@
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
- "peft_version": "0.18.2.dev0@e7355a3b2233820f6f30e558ce133ed22673a087",
  "qalora_group_size": 16,
  "r": 16,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
- "model.language_model.layers.4.self_attn.k_proj",
- "model.language_model.layers.17.self_attn.o_proj",
- "model.language_model.layers.3.mlp.up_proj",
  "model.language_model.layers.17.mlp.up_proj",
- "model.language_model.layers.8.mlp.down_proj",
  "model.language_model.layers.27.self_attn.k_proj",
- "model.language_model.layers.28.mlp.down_proj",
- "model.language_model.layers.6.mlp.up_proj",
- "model.language_model.layers.24.self_attn.k_proj",
- "model.language_model.layers.6.self_attn.q_proj",
- "model.language_model.layers.17.self_attn.q_proj",
- "model.language_model.layers.15.self_attn.k_proj",
- "model.language_model.layers.24.mlp.up_proj",
- "model.language_model.layers.19.mlp.gate_proj",
- "model.language_model.layers.16.self_attn.k_proj",
- "model.language_model.layers.26.self_attn.q_proj",
- "model.language_model.layers.21.mlp.up_proj",
- "model.language_model.layers.17.mlp.down_proj",
- "model.language_model.layers.10.self_attn.v_proj",
- "model.language_model.layers.25.mlp.down_proj",
- "model.language_model.layers.11.mlp.up_proj",
- "model.language_model.layers.2.self_attn.o_proj",
- "model.language_model.layers.15.mlp.down_proj",
- "model.language_model.layers.10.self_attn.k_proj",
- "model.language_model.layers.15.self_attn.q_proj",
- "model.language_model.layers.9.self_attn.v_proj",
- "model.language_model.layers.27.self_attn.o_proj",
- "model.language_model.layers.3.self_attn.v_proj",
- "model.language_model.layers.10.self_attn.q_proj",
- "model.language_model.layers.21.mlp.gate_proj",
- "model.language_model.layers.25.self_attn.q_proj",
  "model.language_model.layers.5.self_attn.o_proj",
  "model.language_model.layers.2.mlp.gate_proj",
- "model.language_model.layers.9.mlp.gate_proj",
- "model.language_model.layers.19.self_attn.v_proj",
- "model.language_model.layers.18.self_attn.k_proj",
- "model.language_model.layers.19.mlp.down_proj",
- "model.language_model.layers.23.self_attn.o_proj",
- "model.language_model.layers.27.mlp.gate_proj",
- "model.language_model.layers.0.mlp.up_proj",
- "model.language_model.layers.20.mlp.gate_proj",
- "model.language_model.layers.28.self_attn.o_proj",
- "model.language_model.layers.4.self_attn.o_proj",
- "model.language_model.layers.28.self_attn.v_proj",
- "model.language_model.layers.11.self_attn.q_proj",
  "model.language_model.layers.26.self_attn.o_proj",
- "model.language_model.layers.9.mlp.down_proj",
- "model.language_model.layers.27.self_attn.v_proj",
- "model.language_model.layers.23.mlp.up_proj",
- "model.language_model.layers.2.mlp.up_proj",
- "model.language_model.layers.0.mlp.gate_proj",
- "model.language_model.layers.18.self_attn.o_proj",
- "model.language_model.layers.19.self_attn.k_proj",
- "model.language_model.layers.10.mlp.down_proj",
- "model.language_model.layers.10.mlp.gate_proj",
- "model.language_model.layers.0.self_attn.o_proj",
- "model.language_model.layers.20.mlp.down_proj",
- "model.language_model.layers.10.self_attn.o_proj",
- "model.language_model.layers.15.self_attn.o_proj",
- "model.language_model.layers.18.mlp.down_proj",
- "model.language_model.layers.1.self_attn.v_proj",
- "model.language_model.layers.13.self_attn.q_proj",
- "model.language_model.layers.18.self_attn.q_proj",
- "model.language_model.layers.3.mlp.down_proj",
- "model.language_model.layers.20.self_attn.k_proj",
- "model.language_model.layers.14.self_attn.o_proj",
- "model.language_model.layers.7.mlp.down_proj",
- "model.language_model.layers.25.self_attn.v_proj",
- "model.language_model.layers.29.mlp.gate_proj",
- "model.language_model.layers.2.self_attn.k_proj",
- "model.language_model.layers.5.self_attn.k_proj",
- "model.language_model.layers.9.self_attn.k_proj",
- "model.language_model.layers.1.mlp.gate_proj",
- "model.language_model.layers.8.self_attn.o_proj",
- "model.language_model.layers.22.self_attn.k_proj",
- "model.language_model.layers.3.self_attn.q_proj",
- "model.language_model.layers.23.self_attn.k_proj",
- "model.language_model.layers.3.self_attn.k_proj",
- "model.language_model.layers.19.self_attn.q_proj",
- "model.language_model.layers.18.self_attn.v_proj",
- "model.language_model.layers.10.mlp.up_proj",
- "model.language_model.layers.11.mlp.gate_proj",
- "model.language_model.layers.1.mlp.up_proj",
- "model.language_model.layers.18.mlp.gate_proj",
- "model.language_model.layers.8.mlp.gate_proj",
  "model.language_model.layers.7.mlp.gate_proj",
  "model.language_model.layers.8.mlp.up_proj",
- "model.language_model.layers.5.self_attn.q_proj",
- "model.language_model.layers.14.self_attn.k_proj",
- "model.language_model.layers.22.self_attn.q_proj",
- "model.language_model.layers.4.mlp.down_proj",
- "model.language_model.layers.22.mlp.gate_proj",
- "model.language_model.layers.15.self_attn.v_proj",
- "model.language_model.layers.21.self_attn.o_proj",
- "model.language_model.layers.11.self_attn.o_proj",
- "model.language_model.layers.20.mlp.up_proj",
- "model.language_model.layers.16.self_attn.q_proj",
- "model.language_model.layers.1.self_attn.k_proj",
- "model.language_model.layers.24.mlp.gate_proj",
- "model.language_model.layers.26.mlp.gate_proj",
  "model.language_model.layers.2.self_attn.q_proj",
- "model.language_model.layers.4.mlp.gate_proj",
  "model.language_model.layers.7.self_attn.q_proj",
- "model.language_model.layers.14.self_attn.v_proj",
- "model.language_model.layers.27.self_attn.q_proj",
- "model.language_model.layers.29.mlp.up_proj",
- "model.language_model.layers.28.self_attn.k_proj",
- "model.language_model.layers.24.self_attn.o_proj",
- "model.language_model.layers.26.self_attn.k_proj",
- "model.language_model.layers.21.mlp.down_proj",
- "model.language_model.layers.14.mlp.gate_proj",
- "model.language_model.layers.25.mlp.up_proj",
  "model.language_model.layers.27.mlp.down_proj",
- "model.language_model.layers.20.self_attn.v_proj",
- "model.language_model.layers.0.mlp.down_proj",
- "model.language_model.layers.6.self_attn.v_proj",
- "model.language_model.layers.4.self_attn.q_proj",
- "model.language_model.layers.9.self_attn.q_proj",
- "model.language_model.layers.0.self_attn.q_proj",
- "model.language_model.layers.27.mlp.up_proj",
- "model.language_model.layers.29.self_attn.k_proj",
- "model.language_model.layers.29.self_attn.q_proj",
- "model.language_model.layers.12.mlp.up_proj",
- "model.language_model.layers.6.mlp.down_proj",
  "model.language_model.layers.2.mlp.down_proj",
  "model.language_model.layers.6.mlp.gate_proj",
- "model.language_model.layers.24.self_attn.v_proj",
- "model.language_model.layers.4.mlp.up_proj",
  "model.language_model.layers.9.self_attn.o_proj",
- "model.language_model.layers.22.self_attn.v_proj",
- "model.language_model.layers.23.mlp.gate_proj",
- "model.language_model.layers.5.mlp.down_proj",
- "model.language_model.layers.13.self_attn.o_proj",
- "model.language_model.layers.14.mlp.up_proj",
- "model.language_model.layers.15.mlp.gate_proj",
- "model.language_model.layers.19.self_attn.o_proj",
- "model.language_model.layers.24.mlp.down_proj",
- "model.language_model.layers.21.self_attn.q_proj",
- "model.language_model.layers.15.mlp.up_proj",
- "model.language_model.layers.26.mlp.up_proj",
- "model.language_model.layers.26.mlp.down_proj",
- "model.language_model.layers.25.self_attn.o_proj",
  "model.language_model.layers.8.self_attn.v_proj",
- "model.language_model.layers.12.self_attn.o_proj",
- "model.language_model.layers.6.self_attn.k_proj",
- "model.language_model.layers.17.mlp.gate_proj",
- "model.language_model.layers.12.self_attn.k_proj",
- "model.language_model.layers.13.mlp.down_proj",
- "model.language_model.layers.1.mlp.down_proj",
  "model.language_model.layers.3.mlp.gate_proj",
- "model.language_model.layers.14.mlp.down_proj",
  "model.language_model.layers.9.mlp.up_proj",
- "model.language_model.layers.21.self_attn.k_proj",
- "model.language_model.layers.6.self_attn.o_proj",
- "model.language_model.layers.0.self_attn.v_proj",
- "model.language_model.layers.16.mlp.down_proj",
  "model.language_model.layers.8.self_attn.k_proj",
- "model.language_model.layers.12.mlp.gate_proj",
- "model.language_model.layers.7.self_attn.o_proj",
- "model.language_model.layers.18.mlp.up_proj",
- "model.language_model.layers.13.mlp.up_proj",
- "model.language_model.layers.16.mlp.up_proj",
- "model.language_model.layers.17.self_attn.k_proj",
- "model.language_model.layers.25.self_attn.k_proj",
- "model.language_model.layers.8.self_attn.q_proj",
  "model.language_model.layers.4.self_attn.v_proj",
- "model.language_model.layers.23.self_attn.q_proj",
- "model.language_model.layers.1.self_attn.o_proj",
- "model.language_model.layers.5.mlp.up_proj",
- "model.language_model.layers.13.self_attn.k_proj",
- "model.language_model.layers.7.self_attn.k_proj",
- "model.language_model.layers.22.self_attn.o_proj",
- "model.language_model.layers.22.mlp.up_proj",
- "model.language_model.layers.16.self_attn.o_proj",
- "model.language_model.layers.24.self_attn.q_proj",
- "model.language_model.layers.12.self_attn.q_proj",
  "model.language_model.layers.2.self_attn.v_proj",
- "model.language_model.layers.12.self_attn.v_proj",
  "model.language_model.layers.13.mlp.gate_proj",
- "model.language_model.layers.12.mlp.down_proj",
- "model.language_model.layers.14.self_attn.q_proj",
- "model.language_model.layers.26.self_attn.v_proj",
- "model.language_model.layers.28.mlp.up_proj",
- "model.language_model.layers.19.mlp.up_proj",
- "model.language_model.layers.16.mlp.gate_proj",
- "model.language_model.layers.7.self_attn.v_proj",
- "model.language_model.layers.25.mlp.gate_proj",
- "model.language_model.layers.13.self_attn.v_proj",
  "model.language_model.layers.20.self_attn.q_proj",
- "model.language_model.layers.5.mlp.gate_proj",
- "model.language_model.layers.1.self_attn.q_proj",
- "model.language_model.layers.11.mlp.down_proj",
- "model.language_model.layers.0.self_attn.k_proj",
- "model.language_model.layers.21.self_attn.v_proj",
- "model.language_model.layers.28.self_attn.q_proj",
  "model.language_model.layers.29.self_attn.o_proj",
  "model.language_model.layers.11.self_attn.k_proj",
- "model.language_model.layers.29.mlp.down_proj",
  "model.language_model.layers.7.mlp.up_proj",
  "model.language_model.layers.22.mlp.down_proj",
  "model.language_model.layers.20.self_attn.o_proj",
- "model.language_model.layers.3.self_attn.o_proj",
- "model.language_model.layers.23.mlp.down_proj",
- "model.language_model.layers.16.self_attn.v_proj",
- "model.language_model.layers.28.mlp.gate_proj"
  ],
  "target_parameters": null,
  "task_type": "CAUSAL_LM",
 
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
+ "peft_version": "0.18.2.dev0@7a4b07f2070162972f8c0515bc3acd19f81c0ad7",
  "qalora_group_size": 16,
  "r": 16,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
+ "model.language_model.layers.17.self_attn.q_proj",
+ "model.language_model.layers.7.self_attn.o_proj",
+ "model.language_model.layers.3.mlp.down_proj",
+ "model.language_model.layers.14.mlp.up_proj",
+ "model.language_model.layers.17.self_attn.k_proj",
+ "model.language_model.layers.25.self_attn.o_proj",
+ "model.language_model.layers.6.self_attn.q_proj",
+ "model.language_model.layers.5.self_attn.q_proj",
+ "model.language_model.layers.1.mlp.gate_proj",
  "model.language_model.layers.17.mlp.up_proj",
+ "model.language_model.layers.5.self_attn.k_proj",
+ "model.language_model.layers.16.self_attn.o_proj",
+ "model.language_model.layers.18.mlp.up_proj",
+ "model.language_model.layers.25.self_attn.k_proj",
+ "model.language_model.layers.23.mlp.down_proj",
+ "model.language_model.layers.27.mlp.up_proj",
  "model.language_model.layers.27.self_attn.k_proj",
  "model.language_model.layers.5.self_attn.o_proj",
+ "model.language_model.layers.22.self_attn.k_proj",
+ "model.language_model.layers.1.mlp.down_proj",
  "model.language_model.layers.2.mlp.gate_proj",
  "model.language_model.layers.26.self_attn.o_proj",
  "model.language_model.layers.7.mlp.gate_proj",
+ "model.language_model.layers.24.self_attn.q_proj",
+ "model.language_model.layers.3.self_attn.o_proj",
+ "model.language_model.layers.0.self_attn.q_proj",
+ "model.language_model.layers.21.self_attn.k_proj",
+ "model.language_model.layers.23.self_attn.o_proj",
+ "model.language_model.layers.9.self_attn.q_proj",
+ "model.language_model.layers.5.mlp.gate_proj",
+ "model.language_model.layers.10.self_attn.v_proj",
  "model.language_model.layers.8.mlp.up_proj",
+ "model.language_model.layers.26.self_attn.v_proj",
  "model.language_model.layers.2.self_attn.q_proj",
+ "model.language_model.layers.13.self_attn.o_proj",
+ "model.language_model.layers.7.mlp.down_proj",
+ "model.language_model.layers.24.mlp.down_proj",
+ "model.language_model.layers.6.self_attn.k_proj",
+ "model.language_model.layers.0.self_attn.k_proj",
+ "model.language_model.layers.1.mlp.up_proj",
+ "model.language_model.layers.28.mlp.down_proj",
+ "model.language_model.layers.2.self_attn.k_proj",
+ "model.language_model.layers.22.mlp.up_proj",
  "model.language_model.layers.7.self_attn.q_proj",
+ "model.language_model.layers.22.self_attn.q_proj",
  "model.language_model.layers.27.mlp.down_proj",
  "model.language_model.layers.2.mlp.down_proj",
+ "model.language_model.layers.19.mlp.down_proj",
  "model.language_model.layers.6.mlp.gate_proj",
  "model.language_model.layers.9.self_attn.o_proj",
+ "model.language_model.layers.15.mlp.down_proj",
+ "model.language_model.layers.4.self_attn.o_proj",
+ "model.language_model.layers.29.self_attn.k_proj",
+ "model.language_model.layers.18.self_attn.q_proj",
+ "model.language_model.layers.11.mlp.down_proj",
+ "model.language_model.layers.26.mlp.gate_proj",
+ "model.language_model.layers.23.mlp.up_proj",
+ "model.language_model.layers.0.mlp.down_proj",
  "model.language_model.layers.8.self_attn.v_proj",
+ "model.language_model.layers.14.self_attn.k_proj",
+ "model.language_model.layers.21.mlp.up_proj",
+ "model.language_model.layers.10.self_attn.o_proj",
+ "model.language_model.layers.24.mlp.gate_proj",
+ "model.language_model.layers.28.mlp.up_proj",
+ "model.language_model.layers.29.mlp.down_proj",
  "model.language_model.layers.3.mlp.gate_proj",
+ "model.language_model.layers.8.mlp.down_proj",
+ "model.language_model.layers.9.mlp.down_proj",
+ "model.language_model.layers.18.mlp.down_proj",
+ "model.language_model.layers.19.mlp.gate_proj",
+ "model.language_model.layers.26.mlp.down_proj",
+ "model.language_model.layers.9.self_attn.v_proj",
  "model.language_model.layers.9.mlp.up_proj",
+ "model.language_model.layers.10.self_attn.q_proj",
+ "model.language_model.layers.11.self_attn.q_proj",
+ "model.language_model.layers.18.mlp.gate_proj",
+ "model.language_model.layers.16.self_attn.v_proj",
+ "model.language_model.layers.1.self_attn.k_proj",
+ "model.language_model.layers.25.mlp.up_proj",
+ "model.language_model.layers.28.self_attn.v_proj",
+ "model.language_model.layers.15.mlp.gate_proj",
+ "model.language_model.layers.9.self_attn.k_proj",
+ "model.language_model.layers.27.mlp.gate_proj",
+ "model.language_model.layers.14.self_attn.o_proj",
+ "model.language_model.layers.22.mlp.gate_proj",
+ "model.language_model.layers.14.mlp.down_proj",
  "model.language_model.layers.8.self_attn.k_proj",
+ "model.language_model.layers.12.self_attn.o_proj",
  "model.language_model.layers.4.self_attn.v_proj",
+ "model.language_model.layers.10.mlp.down_proj",
+ "model.language_model.layers.24.mlp.up_proj",
+ "model.language_model.layers.25.mlp.gate_proj",
  "model.language_model.layers.2.self_attn.v_proj",
+ "model.language_model.layers.4.self_attn.k_proj",
+ "model.language_model.layers.8.self_attn.q_proj",
+ "model.language_model.layers.18.self_attn.v_proj",
+ "model.language_model.layers.27.self_attn.o_proj",
+ "model.language_model.layers.16.self_attn.q_proj",
+ "model.language_model.layers.3.mlp.up_proj",
  "model.language_model.layers.13.mlp.gate_proj",
+ "model.language_model.layers.17.mlp.down_proj",
+ "model.language_model.layers.28.self_attn.o_proj",
  "model.language_model.layers.20.self_attn.q_proj",
+ "model.language_model.layers.0.mlp.up_proj",
+ "model.language_model.layers.16.mlp.down_proj",
  "model.language_model.layers.29.self_attn.o_proj",
  "model.language_model.layers.11.self_attn.k_proj",
+ "model.language_model.layers.20.self_attn.v_proj",
+ "model.language_model.layers.14.self_attn.v_proj",
+ "model.language_model.layers.11.mlp.gate_proj",
+ "model.language_model.layers.21.mlp.down_proj",
+ "model.language_model.layers.12.mlp.up_proj",
+ "model.language_model.layers.10.mlp.gate_proj",
+ "model.language_model.layers.10.self_attn.k_proj",
+ "model.language_model.layers.27.self_attn.q_proj",
+ "model.language_model.layers.8.mlp.gate_proj",
+ "model.language_model.layers.19.self_attn.q_proj",
+ "model.language_model.layers.23.self_attn.k_proj",
+ "model.language_model.layers.13.self_attn.q_proj",
+ "model.language_model.layers.0.self_attn.v_proj",
+ "model.language_model.layers.8.self_attn.o_proj",
+ "model.language_model.layers.0.mlp.gate_proj",
+ "model.language_model.layers.17.mlp.gate_proj",
+ "model.language_model.layers.1.self_attn.o_proj",
+ "model.language_model.layers.14.self_attn.q_proj",
+ "model.language_model.layers.14.mlp.gate_proj",
+ "model.language_model.layers.12.mlp.down_proj",
+ "model.language_model.layers.21.self_attn.o_proj",
+ "model.language_model.layers.5.mlp.up_proj",
+ "model.language_model.layers.20.mlp.up_proj",
+ "model.language_model.layers.13.mlp.up_proj",
+ "model.language_model.layers.18.self_attn.k_proj",
+ "model.language_model.layers.23.mlp.gate_proj",
+ "model.language_model.layers.4.mlp.down_proj",
+ "model.language_model.layers.24.self_attn.o_proj",
+ "model.language_model.layers.28.self_attn.k_proj",
+ "model.language_model.layers.13.self_attn.v_proj",
+ "model.language_model.layers.6.mlp.down_proj",
+ "model.language_model.layers.13.mlp.down_proj",
+ "model.language_model.layers.21.self_attn.q_proj",
+ "model.language_model.layers.10.mlp.up_proj",
+ "model.language_model.layers.15.self_attn.v_proj",
+ "model.language_model.layers.0.self_attn.o_proj",
+ "model.language_model.layers.9.mlp.gate_proj",
+ "model.language_model.layers.16.mlp.up_proj",
+ "model.language_model.layers.11.self_attn.o_proj",
+ "model.language_model.layers.17.self_attn.o_proj",
+ "model.language_model.layers.20.mlp.gate_proj",
+ "model.language_model.layers.26.mlp.up_proj",
+ "model.language_model.layers.15.mlp.up_proj",
+ "model.language_model.layers.12.mlp.gate_proj",
+ "model.language_model.layers.22.self_attn.o_proj",
+ "model.language_model.layers.28.mlp.gate_proj",
+ "model.language_model.layers.21.mlp.gate_proj",
+ "model.language_model.layers.2.mlp.up_proj",
+ "model.language_model.layers.28.self_attn.q_proj",
+ "model.language_model.layers.29.self_attn.q_proj",
  "model.language_model.layers.7.mlp.up_proj",
+ "model.language_model.layers.15.self_attn.q_proj",
+ "model.language_model.layers.19.self_attn.k_proj",
+ "model.language_model.layers.7.self_attn.v_proj",
+ "model.language_model.layers.29.mlp.gate_proj",
+ "model.language_model.layers.24.self_attn.k_proj",
+ "model.language_model.layers.16.mlp.gate_proj",
+ "model.language_model.layers.12.self_attn.k_proj",
+ "model.language_model.layers.4.mlp.up_proj",
+ "model.language_model.layers.20.mlp.down_proj",
+ "model.language_model.layers.5.mlp.down_proj",
  "model.language_model.layers.22.mlp.down_proj",
+ "model.language_model.layers.3.self_attn.q_proj",
+ "model.language_model.layers.26.self_attn.k_proj",
  "model.language_model.layers.20.self_attn.o_proj",
+ "model.language_model.layers.24.self_attn.v_proj",
+ "model.language_model.layers.21.self_attn.v_proj",
+ "model.language_model.layers.19.self_attn.o_proj",
+ "model.language_model.layers.29.mlp.up_proj",
+ "model.language_model.layers.13.self_attn.k_proj",
+ "model.language_model.layers.2.self_attn.o_proj",
+ "model.language_model.layers.16.self_attn.k_proj",
+ "model.language_model.layers.22.self_attn.v_proj",
+ "model.language_model.layers.25.self_attn.v_proj",
+ "model.language_model.layers.25.mlp.down_proj",
+ "model.language_model.layers.4.mlp.gate_proj",
+ "model.language_model.layers.6.self_attn.o_proj",
+ "model.language_model.layers.25.self_attn.q_proj",
+ "model.language_model.layers.7.self_attn.k_proj",
+ "model.language_model.layers.11.mlp.up_proj",
+ "model.language_model.layers.20.self_attn.k_proj",
+ "model.language_model.layers.6.mlp.up_proj",
+ "model.language_model.layers.15.self_attn.k_proj",
+ "model.language_model.layers.19.mlp.up_proj",
+ "model.language_model.layers.12.self_attn.q_proj",
+ "model.language_model.layers.4.self_attn.q_proj",
+ "model.language_model.layers.18.self_attn.o_proj",
+ "model.language_model.layers.1.self_attn.v_proj",
+ "model.language_model.layers.15.self_attn.o_proj",
+ "model.language_model.layers.19.self_attn.v_proj",
+ "model.language_model.layers.6.self_attn.v_proj",
+ "model.language_model.layers.12.self_attn.v_proj",
+ "model.language_model.layers.3.self_attn.k_proj",
+ "model.language_model.layers.26.self_attn.q_proj",
+ "model.language_model.layers.1.self_attn.q_proj",
+ "model.language_model.layers.27.self_attn.v_proj",
+ "model.language_model.layers.3.self_attn.v_proj",
+ "model.language_model.layers.23.self_attn.q_proj"
  ],
  "target_parameters": null,
  "task_type": "CAUSAL_LM",
checkpoint-242/adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e6c31eb96af91247162f1fdd882f14bf908d1c3c4b7925203d58312809b5007e
  size 37232104

  version https://git-lfs.github.com/spec/v1
+ oid sha256:60ea5f7283321d8560f953cfc7a6167372abf99e9234771f4aba7dfebccfd34d
  size 37232104
checkpoint-242/optimizer.pt CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:01cc0e44a43e58c8b45161465df8b4fadbd469fd35fab1d5c423759ee5ba8a68
- size 38229709

  version https://git-lfs.github.com/spec/v1
+ oid sha256:9335e6ec0028f7338bdc2494df22f046fc2a105e9276c73cd535e9d8829a1a61
+ size 38237839
checkpoint-242/rng_state.pth CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:67a6c6b866e89a5917944b74173a0b8536ce4695e579297378c983b24e5a507b
  size 14645

  version https://git-lfs.github.com/spec/v1
+ oid sha256:a243edb8b1db34fb7115e5de0e37c8594f6006258a70188264a63e2194320e48
  size 14645
checkpoint-242/scheduler.pt CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7702a97a98d09da954174b71008538d77d6adeaeaeb17c9732ebeab8932e0e3e
  size 1465

  version https://git-lfs.github.com/spec/v1
+ oid sha256:eb3700bbbbe712ec1083cf3ee091f9a368b7965703741f5d69ab3e651526ba31
  size 1465
checkpoint-242/tokenizer.json CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a2619fe11b50dbed06ac443c51d757b354d0b62d64baa514404d4e84e6713519
- size 32169780

  version https://git-lfs.github.com/spec/v1
+ oid sha256:cc8d3a0ce36466ccc1278bf987df5f71db1719b9ca6b4118264f45cb627bfe0f
+ size 32169626
checkpoint-242/tokenizer_config.json CHANGED
@@ -41,7 +41,7 @@
  "think_token": "<|think|>"
  },
  "pad_token": "<pad>",
- "padding_side": "left",
  "processor_class": "Gemma4Processor",
  "response_schema": {
  "properties": {

  "think_token": "<|think|>"
  },
  "pad_token": "<pad>",
+ "padding_side": "right",
  "processor_class": "Gemma4Processor",
  "response_schema": {
  "properties": {
checkpoint-242/trainer_state.json CHANGED
@@ -1,7 +1,7 @@
 {
 "best_global_step": 242,
- "best_metric": 0.6102388501167297,
- "best_model_checkpoint": "/home/plucky/ml-workspace/models/gemma4-26b-securecode/checkpoint-242",
+ "best_metric": 0.44326454401016235,
+ "best_model_checkpoint": "/workspace/gemma4-26b-securecode/checkpoint-242",
 "epoch": 2.0,
 "eval_steps": 500,
 "global_step": 242,
@@ -10,265 +10,265 @@
 "is_world_process_zero": true,
 "log_history": [
 {
- "entropy": 1.1113492242991925,
+ "entropy": 1.0907821020111441,
 "epoch": 0.0827300930713547,
- "grad_norm": 10.3125,
+ "grad_norm": 20.875,
 "learning_rate": 1.8e-05,
- "loss": 93.48836059570313,
- "mean_token_accuracy": 0.4107020549476147,
- "num_tokens": 81920.0,
+ "loss": 80.26775512695312,
+ "mean_token_accuracy": 0.4542873948812485,
+ "num_tokens": 326185.0,
 "step": 10
 },
 {
- "entropy": 0.8875315530225635,
+ "entropy": 0.8271314173936843,
 "epoch": 0.1654601861427094,
- "grad_norm": 6.15625,
+ "grad_norm": 8.75,
 "learning_rate": 3.8e-05,
- "loss": 67.76697998046875,
- "mean_token_accuracy": 0.5182974558323622,
- "num_tokens": 163840.0,
+ "loss": 58.08096923828125,
+ "mean_token_accuracy": 0.5611657274886965,
+ "num_tokens": 653865.0,
 "step": 20
 },
 {
- "entropy": 0.673606987670064,
+ "entropy": 0.4787554959766567,
 "epoch": 0.2481902792140641,
- "grad_norm": 2.421875,
+ "grad_norm": 1.7109375,
 "learning_rate": 5.8e-05,
- "loss": 37.221334838867186,
- "mean_token_accuracy": 0.6476027386263012,
- "num_tokens": 245760.0,
+ "loss": 25.493240356445312,
+ "mean_token_accuracy": 0.7378443486988544,
+ "num_tokens": 981337.0,
 "step": 30
 },
 {
- "entropy": 1.0845661748200655,
+ "entropy": 0.7855595085769892,
 "epoch": 0.3309203722854188,
- "grad_norm": 1.3671875,
+ "grad_norm": 0.8671875,
 "learning_rate": 7.800000000000001e-05,
- "loss": 22.017848205566406,
- "mean_token_accuracy": 0.7083170266821981,
- "num_tokens": 327680.0,
+ "loss": 14.629072570800782,
+ "mean_token_accuracy": 0.7917733617126942,
+ "num_tokens": 1308584.0,
 "step": 40
 },
 {
- "entropy": 1.1636322166770696,
+ "entropy": 0.7569877350702882,
 "epoch": 0.4136504653567735,
- "grad_norm": 0.703125,
+ "grad_norm": 2.109375,
 "learning_rate": 9.8e-05,
- "loss": 17.47879638671875,
- "mean_token_accuracy": 0.7332558700814843,
- "num_tokens": 409600.0,
+ "loss": 12.609142303466797,
+ "mean_token_accuracy": 0.8013272784650326,
+ "num_tokens": 1635098.0,
 "step": 50
 },
 {
- "entropy": 0.9551631901413202,
+ "entropy": 0.6735223602503538,
 "epoch": 0.4963805584281282,
- "grad_norm": 0.40625,
+ "grad_norm": 16.875,
 "learning_rate": 0.000118,
- "loss": 15.09481201171875,
- "mean_token_accuracy": 0.7555772982537746,
- "num_tokens": 491520.0,
+ "loss": 10.704925537109375,
+ "mean_token_accuracy": 0.8209844313561916,
+ "num_tokens": 1962302.0,
 "step": 60
 },
 {
- "entropy": 0.8048430571332574,
+ "entropy": 0.6005677949637175,
 "epoch": 0.5791106514994829,
- "grad_norm": 0.375,
+ "grad_norm": 1.546875,
 "learning_rate": 0.000138,
- "loss": 13.297686767578124,
- "mean_token_accuracy": 0.7774828754365444,
- "num_tokens": 573440.0,
+ "loss": 9.783185577392578,
+ "mean_token_accuracy": 0.8308866504579783,
+ "num_tokens": 2289982.0,
 "step": 70
 },
 {
- "entropy": 0.8100443260744215,
+ "entropy": 0.5877057909965515,
 "epoch": 0.6618407445708376,
- "grad_norm": 0.4609375,
+ "grad_norm": 11.25,
 "learning_rate": 0.00015800000000000002,
- "loss": 12.752572631835937,
- "mean_token_accuracy": 0.7837084107100963,
- "num_tokens": 655360.0,
+ "loss": 9.298844909667968,
+ "mean_token_accuracy": 0.8359990835189819,
+ "num_tokens": 2616786.0,
 "step": 80
 },
 {
- "entropy": 0.7172152267768979,
+ "entropy": 0.5447238819673658,
 "epoch": 0.7445708376421923,
- "grad_norm": 2.1875,
+ "grad_norm": 1.2890625,
 "learning_rate": 0.00017800000000000002,
- "loss": 11.629959106445312,
- "mean_token_accuracy": 0.799449609220028,
- "num_tokens": 737280.0,
+ "loss": 8.777264404296876,
+ "mean_token_accuracy": 0.8440194871276617,
+ "num_tokens": 2941975.0,
 "step": 90
 },
 {
- "entropy": 0.7284062243998051,
+ "entropy": 0.5323287105187774,
 "epoch": 0.827300930713547,
- "grad_norm": 0.40625,
+ "grad_norm": 0.70703125,
 "learning_rate": 0.00019800000000000002,
- "loss": 11.506278991699219,
- "mean_token_accuracy": 0.8022871781140566,
- "num_tokens": 819200.0,
+ "loss": 8.489185333251953,
+ "mean_token_accuracy": 0.8486687760800123,
+ "num_tokens": 3269655.0,
 "step": 100
 },
 {
- "entropy": 0.6922262106090784,
+ "entropy": 0.4949887519702315,
 "epoch": 0.9100310237849017,
- "grad_norm": 0.341796875,
+ "grad_norm": 0.439453125,
 "learning_rate": 0.00019942266891397815,
- "loss": 11.149666595458985,
- "mean_token_accuracy": 0.8068982377648354,
- "num_tokens": 901120.0,
+ "loss": 8.192723083496094,
+ "mean_token_accuracy": 0.8528529018163681,
+ "num_tokens": 3595193.0,
 "step": 110
 },
 {
- "entropy": 0.6608987387269736,
+ "entropy": 0.4980895221233368,
 "epoch": 0.9927611168562565,
- "grad_norm": 0.373046875,
+ "grad_norm": 0.921875,
 "learning_rate": 0.00019743551343638324,
- "loss": 10.666960906982421,
- "mean_token_accuracy": 0.8124388422816992,
- "num_tokens": 983040.0,
+ "loss": 7.908926391601563,
+ "mean_token_accuracy": 0.8567473825067282,
+ "num_tokens": 3922475.0,
 "step": 120
 },
 {
 "epoch": 1.0,
- "eval_entropy": 0.6862195637336997,
- "eval_loss": 0.6695265769958496,
- "eval_mean_token_accuracy": 0.8135074851124786,
- "eval_num_tokens": 990208.0,
- "eval_runtime": 255.0413,
- "eval_samples_per_second": 0.843,
- "eval_steps_per_second": 0.843,
+ "eval_entropy": 0.5629051625728607,
+ "eval_loss": 0.4985087513923645,
+ "eval_mean_token_accuracy": 0.8571729124978531,
+ "eval_num_tokens": 3949618.0,
+ "eval_runtime": 122.3216,
+ "eval_samples_per_second": 1.758,
+ "eval_steps_per_second": 1.758,
 "step": 121
 },
 {
- "entropy": 0.6788679953617386,
+ "entropy": 0.5185692432937743,
 "epoch": 1.0744570837642193,
- "grad_norm": 0.3984375,
+ "grad_norm": 0.37890625,
 "learning_rate": 0.00019405971991583108,
- "loss": 10.533837127685548,
- "mean_token_accuracy": 0.8129133717923225,
- "num_tokens": 1063936.0,
+ "loss": 7.807546997070313,
+ "mean_token_accuracy": 0.8577670869947989,
+ "num_tokens": 4244530.0,
 "step": 130
 },
 {
- "entropy": 0.5800832805223763,
+ "entropy": 0.4555334035307169,
 "epoch": 1.157187176835574,
- "grad_norm": 0.333984375,
+ "grad_norm": 0.1953125,
 "learning_rate": 0.00018934339971482674,
- "loss": 9.498150634765626,
- "mean_token_accuracy": 0.8281555753201246,
- "num_tokens": 1145856.0,
+ "loss": 7.464869689941406,
+ "mean_token_accuracy": 0.8638800706714391,
+ "num_tokens": 4572210.0,
 "step": 140
 },
 {
- "entropy": 0.6344770405441522,
+ "entropy": 0.47754106651991607,
 "epoch": 1.2399172699069285,
- "grad_norm": 0.388671875,
+ "grad_norm": 1.21875,
 "learning_rate": 0.00018335376920472097,
- "loss": 10.217367553710938,
- "mean_token_accuracy": 0.8195327781140804,
- "num_tokens": 1227776.0,
+ "loss": 7.764054870605468,
+ "mean_token_accuracy": 0.8579531148076057,
+ "num_tokens": 4897694.0,
 "step": 150
 },
 {
- "entropy": 0.6310219537466765,
+ "entropy": 0.4550897226668894,
 "epoch": 1.3226473629782833,
- "grad_norm": 0.380859375,
+ "grad_norm": 0.28515625,
 "learning_rate": 0.00017617619180688085,
- "loss": 10.081737518310547,
- "mean_token_accuracy": 0.8219178050756455,
- "num_tokens": 1309696.0,
+ "loss": 7.322466278076172,
+ "mean_token_accuracy": 0.8654993120580912,
+ "num_tokens": 5223716.0,
 "step": 160
 },
 {
- "entropy": 0.5863334746100008,
+ "entropy": 0.4635292864404619,
 "epoch": 1.4053774560496382,
- "grad_norm": 0.341796875,
+ "grad_norm": 0.8203125,
 "learning_rate": 0.00016791296140450545,
- "loss": 9.392319488525391,
- "mean_token_accuracy": 0.8319227002561093,
- "num_tokens": 1391616.0,
+ "loss": 7.322091674804687,
+ "mean_token_accuracy": 0.8659177150577306,
+ "num_tokens": 5550963.0,
 "step": 170
 },
 {
- "entropy": 0.6232900662347675,
+ "entropy": 0.46733071468770504,
 "epoch": 1.4881075491209927,
- "grad_norm": 0.44921875,
+ "grad_norm": 0.45703125,
 "learning_rate": 0.0001586818444637402,
- "loss": 10.051438140869141,
- "mean_token_accuracy": 0.8215264175087214,
- "num_tokens": 1473536.0,
+ "loss": 7.3169700622558596,
+ "mean_token_accuracy": 0.8659562785178423,
+ "num_tokens": 5878643.0,
 "step": 180
 },
 {
- "entropy": 0.6163463215343654,
+ "entropy": 0.4562882795929909,
 "epoch": 1.5708376421923473,
- "grad_norm": 0.384765625,
+ "grad_norm": 0.302734375,
 "learning_rate": 0.0001486144016415862,
- "loss": 9.878226470947265,
- "mean_token_accuracy": 0.8220768082886935,
- "num_tokens": 1555456.0,
+ "loss": 7.291434478759766,
+ "mean_token_accuracy": 0.8656417932361364,
+ "num_tokens": 6206323.0,
 "step": 190
 },
 {
- "entropy": 0.588023800123483,
+ "entropy": 0.4341404400765896,
 "epoch": 1.6535677352637022,
- "grad_norm": 0.3515625,
+ "grad_norm": 0.30859375,
 "learning_rate": 0.00013785411280082746,
- "loss": 9.45407943725586,
- "mean_token_accuracy": 0.8305283710360527,
- "num_tokens": 1637376.0,
+ "loss": 6.904853057861328,
+ "mean_token_accuracy": 0.8713251128792763,
+ "num_tokens": 6533527.0,
 "step": 200
 },
 {
- "entropy": 0.599842881783843,
+ "entropy": 0.4494163889437914,
 "epoch": 1.736297828335057,
- "grad_norm": 0.37890625,
+ "grad_norm": 0.2177734375,
 "learning_rate": 0.00012655433215401438,
- "loss": 9.548422241210938,
- "mean_token_accuracy": 0.8284735765308142,
- "num_tokens": 1719296.0,
+ "loss": 7.195234680175782,
+ "mean_token_accuracy": 0.8673424527049065,
+ "num_tokens": 6861207.0,
 "step": 210
 },
 {
- "entropy": 0.6552030782215297,
+ "entropy": 0.46514057284221055,
 "epoch": 1.8190279214064116,
- "grad_norm": 0.361328125,
+ "grad_norm": 0.220703125,
 "learning_rate": 0.00011487610267952142,
- "loss": 10.46890640258789,
- "mean_token_accuracy": 0.8134295467287302,
- "num_tokens": 1801216.0,
+ "loss": 7.431344604492187,
+ "mean_token_accuracy": 0.8633232209831476,
+ "num_tokens": 7188281.0,
 "step": 220
 },
 {
- "entropy": 0.5984975789207965,
+ "entropy": 0.43800092255696654,
 "epoch": 1.9017580144777662,
- "grad_norm": 0.353515625,
+ "grad_norm": 0.1962890625,
 "learning_rate": 0.00010298586095833151,
- "loss": 9.603475952148438,
- "mean_token_accuracy": 0.827079250663519,
- "num_tokens": 1883136.0,
+ "loss": 7.023017883300781,
+ "mean_token_accuracy": 0.8693898901343345,
+ "num_tokens": 7513788.0,
 "step": 230
 },
 {
- "entropy": 0.5947112645488233,
+ "entropy": 0.44280060222372414,
 "epoch": 1.984488107549121,
- "grad_norm": 0.64453125,
+ "grad_norm": 0.453125,
 "learning_rate": 9.10530651419099e-05,
- "loss": 9.561953735351562,
- "mean_token_accuracy": 0.8265655554831028,
- "num_tokens": 1965056.0,
+ "loss": 7.070135498046875,
+ "mean_token_accuracy": 0.8684426795691251,
+ "num_tokens": 7839097.0,
 "step": 240
 },
 {
 "epoch": 2.0,
- "eval_entropy": 0.6100467269503793,
- "eval_loss": 0.6102388501167297,
- "eval_mean_token_accuracy": 0.8254676164582718,
- "eval_num_tokens": 1980416.0,
- "eval_runtime": 254.828,
- "eval_samples_per_second": 0.844,
- "eval_steps_per_second": 0.844,
+ "eval_entropy": 0.43673129948072653,
+ "eval_loss": 0.44326454401016235,
+ "eval_mean_token_accuracy": 0.8689799866010977,
+ "eval_num_tokens": 7899236.0,
+ "eval_runtime": 122.7611,
+ "eval_samples_per_second": 1.751,
+ "eval_steps_per_second": 1.751,
 "step": 242
 }
 ],
@@ -289,7 +289,7 @@
 "attributes": {}
 }
 },
- "total_flos": 2.980881204394721e+17,
+ "total_flos": 1.1889766655832988e+18,
 "train_batch_size": 1,
 "trial_name": null,
 "trial_params": null
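A quick sanity check on the numbers above (not part of the commit): `best_metric` is the eval cross-entropy loss in nats, so the change from about 0.610 to 0.443 can be read as a drop in eval perplexity via `exp(loss)`:

```python
import math

# Eval losses taken from the trainer_state.json diff above.
old_loss = 0.6102388501167297   # previous run's best_metric (eval_loss)
new_loss = 0.44326454401016235  # this run's best_metric (eval_loss)

# Mean cross-entropy in nats -> perplexity.
print(round(math.exp(old_loss), 3))  # 1.841
print(round(math.exp(new_loss), 3))  # 1.558
```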
checkpoint-242/training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:2eaf4e1eba101412810b250e27914b2df87f93b0a9c62028451f50813e692b8e
+ oid sha256:5bdf412780aad6b6bc055248dd1640e0d1a2282e1c11f28390eac7fae5fae303
 size 5713
checkpoint-363/adapter_config.json CHANGED
@@ -24,217 +24,217 @@
 "megatron_core": "megatron.core",
 "modules_to_save": null,
 "peft_type": "LORA",
- "peft_version": "0.18.2.dev0@e7355a3b2233820f6f30e558ce133ed22673a087",
 "qalora_group_size": 16,
 "r": 16,
 "rank_pattern": {},
 "revision": null,
 "target_modules": [
- "model.language_model.layers.4.self_attn.k_proj",
- "model.language_model.layers.17.self_attn.o_proj",
- "model.language_model.layers.3.mlp.up_proj",
 "model.language_model.layers.17.mlp.up_proj",
- "model.language_model.layers.8.mlp.down_proj",
 "model.language_model.layers.27.self_attn.k_proj",
- "model.language_model.layers.28.mlp.down_proj",
- "model.language_model.layers.6.mlp.up_proj",
- "model.language_model.layers.24.self_attn.k_proj",
- "model.language_model.layers.6.self_attn.q_proj",
- "model.language_model.layers.17.self_attn.q_proj",
- "model.language_model.layers.15.self_attn.k_proj",
- "model.language_model.layers.24.mlp.up_proj",
- "model.language_model.layers.19.mlp.gate_proj",
- "model.language_model.layers.16.self_attn.k_proj",
- "model.language_model.layers.26.self_attn.q_proj",
- "model.language_model.layers.21.mlp.up_proj",
- "model.language_model.layers.17.mlp.down_proj",
- "model.language_model.layers.10.self_attn.v_proj",
- "model.language_model.layers.25.mlp.down_proj",
- "model.language_model.layers.11.mlp.up_proj",
- "model.language_model.layers.2.self_attn.o_proj",
- "model.language_model.layers.15.mlp.down_proj",
- "model.language_model.layers.10.self_attn.k_proj",
- "model.language_model.layers.15.self_attn.q_proj",
- "model.language_model.layers.9.self_attn.v_proj",
- "model.language_model.layers.27.self_attn.o_proj",
- "model.language_model.layers.3.self_attn.v_proj",
- "model.language_model.layers.10.self_attn.q_proj",
- "model.language_model.layers.21.mlp.gate_proj",
- "model.language_model.layers.25.self_attn.q_proj",
 "model.language_model.layers.5.self_attn.o_proj",
 "model.language_model.layers.2.mlp.gate_proj",
- "model.language_model.layers.9.mlp.gate_proj",
- "model.language_model.layers.19.self_attn.v_proj",
- "model.language_model.layers.18.self_attn.k_proj",
- "model.language_model.layers.19.mlp.down_proj",
- "model.language_model.layers.23.self_attn.o_proj",
- "model.language_model.layers.27.mlp.gate_proj",
- "model.language_model.layers.0.mlp.up_proj",
- "model.language_model.layers.20.mlp.gate_proj",
- "model.language_model.layers.28.self_attn.o_proj",
- "model.language_model.layers.4.self_attn.o_proj",
- "model.language_model.layers.28.self_attn.v_proj",
- "model.language_model.layers.11.self_attn.q_proj",
 "model.language_model.layers.26.self_attn.o_proj",
- "model.language_model.layers.9.mlp.down_proj",
- "model.language_model.layers.27.self_attn.v_proj",
- "model.language_model.layers.23.mlp.up_proj",
- "model.language_model.layers.2.mlp.up_proj",
- "model.language_model.layers.0.mlp.gate_proj",
- "model.language_model.layers.18.self_attn.o_proj",
- "model.language_model.layers.19.self_attn.k_proj",
- "model.language_model.layers.10.mlp.down_proj",
- "model.language_model.layers.10.mlp.gate_proj",
- "model.language_model.layers.0.self_attn.o_proj",
- "model.language_model.layers.20.mlp.down_proj",
- "model.language_model.layers.10.self_attn.o_proj",
- "model.language_model.layers.15.self_attn.o_proj",
- "model.language_model.layers.18.mlp.down_proj",
- "model.language_model.layers.1.self_attn.v_proj",
- "model.language_model.layers.13.self_attn.q_proj",
- "model.language_model.layers.18.self_attn.q_proj",
- "model.language_model.layers.3.mlp.down_proj",
- "model.language_model.layers.20.self_attn.k_proj",
- "model.language_model.layers.14.self_attn.o_proj",
- "model.language_model.layers.7.mlp.down_proj",
- "model.language_model.layers.25.self_attn.v_proj",
- "model.language_model.layers.29.mlp.gate_proj",
- "model.language_model.layers.2.self_attn.k_proj",
- "model.language_model.layers.5.self_attn.k_proj",
- "model.language_model.layers.9.self_attn.k_proj",
- "model.language_model.layers.1.mlp.gate_proj",
- "model.language_model.layers.8.self_attn.o_proj",
- "model.language_model.layers.22.self_attn.k_proj",
- "model.language_model.layers.3.self_attn.q_proj",
- "model.language_model.layers.23.self_attn.k_proj",
- "model.language_model.layers.3.self_attn.k_proj",
- "model.language_model.layers.19.self_attn.q_proj",
- "model.language_model.layers.18.self_attn.v_proj",
- "model.language_model.layers.10.mlp.up_proj",
- "model.language_model.layers.11.mlp.gate_proj",
- "model.language_model.layers.1.mlp.up_proj",
- "model.language_model.layers.18.mlp.gate_proj",
- "model.language_model.layers.8.mlp.gate_proj",
 "model.language_model.layers.7.mlp.gate_proj",
 "model.language_model.layers.8.mlp.up_proj",
- "model.language_model.layers.5.self_attn.q_proj",
- "model.language_model.layers.14.self_attn.k_proj",
- "model.language_model.layers.22.self_attn.q_proj",
- "model.language_model.layers.4.mlp.down_proj",
- "model.language_model.layers.22.mlp.gate_proj",
- "model.language_model.layers.15.self_attn.v_proj",
- "model.language_model.layers.21.self_attn.o_proj",
- "model.language_model.layers.11.self_attn.o_proj",
- "model.language_model.layers.20.mlp.up_proj",
- "model.language_model.layers.16.self_attn.q_proj",
- "model.language_model.layers.1.self_attn.k_proj",
- "model.language_model.layers.24.mlp.gate_proj",
- "model.language_model.layers.26.mlp.gate_proj",
 "model.language_model.layers.2.self_attn.q_proj",
- "model.language_model.layers.4.mlp.gate_proj",
 "model.language_model.layers.7.self_attn.q_proj",
- "model.language_model.layers.14.self_attn.v_proj",
- "model.language_model.layers.27.self_attn.q_proj",
- "model.language_model.layers.29.mlp.up_proj",
- "model.language_model.layers.28.self_attn.k_proj",
- "model.language_model.layers.24.self_attn.o_proj",
- "model.language_model.layers.26.self_attn.k_proj",
- "model.language_model.layers.21.mlp.down_proj",
- "model.language_model.layers.14.mlp.gate_proj",
- "model.language_model.layers.25.mlp.up_proj",
 "model.language_model.layers.27.mlp.down_proj",
- "model.language_model.layers.20.self_attn.v_proj",
- "model.language_model.layers.0.mlp.down_proj",
- "model.language_model.layers.6.self_attn.v_proj",
- "model.language_model.layers.4.self_attn.q_proj",
- "model.language_model.layers.9.self_attn.q_proj",
- "model.language_model.layers.0.self_attn.q_proj",
- "model.language_model.layers.27.mlp.up_proj",
- "model.language_model.layers.29.self_attn.k_proj",
- "model.language_model.layers.29.self_attn.q_proj",
- "model.language_model.layers.12.mlp.up_proj",
- "model.language_model.layers.6.mlp.down_proj",
 "model.language_model.layers.2.mlp.down_proj",
 "model.language_model.layers.6.mlp.gate_proj",
- "model.language_model.layers.24.self_attn.v_proj",
- "model.language_model.layers.4.mlp.up_proj",
 "model.language_model.layers.9.self_attn.o_proj",
- "model.language_model.layers.22.self_attn.v_proj",
- "model.language_model.layers.23.mlp.gate_proj",
- "model.language_model.layers.5.mlp.down_proj",
- "model.language_model.layers.13.self_attn.o_proj",
- "model.language_model.layers.14.mlp.up_proj",
- "model.language_model.layers.15.mlp.gate_proj",
- "model.language_model.layers.19.self_attn.o_proj",
- "model.language_model.layers.24.mlp.down_proj",
- "model.language_model.layers.21.self_attn.q_proj",
- "model.language_model.layers.15.mlp.up_proj",
- "model.language_model.layers.26.mlp.up_proj",
- "model.language_model.layers.26.mlp.down_proj",
- "model.language_model.layers.25.self_attn.o_proj",
 "model.language_model.layers.8.self_attn.v_proj",
- "model.language_model.layers.12.self_attn.o_proj",
- "model.language_model.layers.6.self_attn.k_proj",
- "model.language_model.layers.17.mlp.gate_proj",
- "model.language_model.layers.12.self_attn.k_proj",
- "model.language_model.layers.13.mlp.down_proj",
- "model.language_model.layers.1.mlp.down_proj",
 "model.language_model.layers.3.mlp.gate_proj",
- "model.language_model.layers.14.mlp.down_proj",
 "model.language_model.layers.9.mlp.up_proj",
- "model.language_model.layers.21.self_attn.k_proj",
- "model.language_model.layers.6.self_attn.o_proj",
- "model.language_model.layers.0.self_attn.v_proj",
- "model.language_model.layers.16.mlp.down_proj",
 "model.language_model.layers.8.self_attn.k_proj",
- "model.language_model.layers.12.mlp.gate_proj",
- "model.language_model.layers.7.self_attn.o_proj",
- "model.language_model.layers.18.mlp.up_proj",
- "model.language_model.layers.13.mlp.up_proj",
- "model.language_model.layers.16.mlp.up_proj",
- "model.language_model.layers.17.self_attn.k_proj",
- "model.language_model.layers.25.self_attn.k_proj",
- "model.language_model.layers.8.self_attn.q_proj",
 "model.language_model.layers.4.self_attn.v_proj",
- "model.language_model.layers.23.self_attn.q_proj",
- "model.language_model.layers.1.self_attn.o_proj",
- "model.language_model.layers.5.mlp.up_proj",
- "model.language_model.layers.13.self_attn.k_proj",
- "model.language_model.layers.7.self_attn.k_proj",
- "model.language_model.layers.22.self_attn.o_proj",
- "model.language_model.layers.22.mlp.up_proj",
- "model.language_model.layers.16.self_attn.o_proj",
- "model.language_model.layers.24.self_attn.q_proj",
- "model.language_model.layers.12.self_attn.q_proj",
 "model.language_model.layers.2.self_attn.v_proj",
- "model.language_model.layers.12.self_attn.v_proj",
 "model.language_model.layers.13.mlp.gate_proj",
- "model.language_model.layers.12.mlp.down_proj",
- "model.language_model.layers.14.self_attn.q_proj",
- "model.language_model.layers.26.self_attn.v_proj",
- "model.language_model.layers.28.mlp.up_proj",
- "model.language_model.layers.19.mlp.up_proj",
- "model.language_model.layers.16.mlp.gate_proj",
- "model.language_model.layers.7.self_attn.v_proj",
- "model.language_model.layers.25.mlp.gate_proj",
- "model.language_model.layers.13.self_attn.v_proj",
 "model.language_model.layers.20.self_attn.q_proj",
- "model.language_model.layers.5.mlp.gate_proj",
- "model.language_model.layers.1.self_attn.q_proj",
- "model.language_model.layers.11.mlp.down_proj",
- "model.language_model.layers.0.self_attn.k_proj",
- "model.language_model.layers.21.self_attn.v_proj",
- "model.language_model.layers.28.self_attn.q_proj",
 "model.language_model.layers.29.self_attn.o_proj",
 "model.language_model.layers.11.self_attn.k_proj",
- "model.language_model.layers.29.mlp.down_proj",
 "model.language_model.layers.7.mlp.up_proj",
 "model.language_model.layers.22.mlp.down_proj",
 "model.language_model.layers.20.self_attn.o_proj",
- "model.language_model.layers.3.self_attn.o_proj",
- "model.language_model.layers.23.mlp.down_proj",
- "model.language_model.layers.16.self_attn.v_proj",
- "model.language_model.layers.28.mlp.gate_proj"
 ],
 "target_parameters": null,
 "task_type": "CAUSAL_LM",

 "megatron_core": "megatron.core",
 "modules_to_save": null,
 "peft_type": "LORA",
+ "peft_version": "0.18.2.dev0@7a4b07f2070162972f8c0515bc3acd19f81c0ad7",
 "qalora_group_size": 16,
 "r": 16,
 "rank_pattern": {},
 "revision": null,
 "target_modules": [
+ "model.language_model.layers.17.self_attn.q_proj",
+ "model.language_model.layers.7.self_attn.o_proj",
+ "model.language_model.layers.3.mlp.down_proj",
+ "model.language_model.layers.14.mlp.up_proj",
+ "model.language_model.layers.17.self_attn.k_proj",
+ "model.language_model.layers.25.self_attn.o_proj",
+ "model.language_model.layers.6.self_attn.q_proj",
+ "model.language_model.layers.5.self_attn.q_proj",
+ "model.language_model.layers.1.mlp.gate_proj",
 "model.language_model.layers.17.mlp.up_proj",
+ "model.language_model.layers.5.self_attn.k_proj",
+ "model.language_model.layers.16.self_attn.o_proj",
+ "model.language_model.layers.18.mlp.up_proj",
+ "model.language_model.layers.25.self_attn.k_proj",
+ "model.language_model.layers.23.mlp.down_proj",
+ "model.language_model.layers.27.mlp.up_proj",
 "model.language_model.layers.27.self_attn.k_proj",
 "model.language_model.layers.5.self_attn.o_proj",
+ "model.language_model.layers.22.self_attn.k_proj",
+ "model.language_model.layers.1.mlp.down_proj",
 "model.language_model.layers.2.mlp.gate_proj",
 "model.language_model.layers.26.self_attn.o_proj",
 "model.language_model.layers.7.mlp.gate_proj",
+ "model.language_model.layers.24.self_attn.q_proj",
+ "model.language_model.layers.3.self_attn.o_proj",
+ "model.language_model.layers.0.self_attn.q_proj",
+ "model.language_model.layers.21.self_attn.k_proj",
+ "model.language_model.layers.23.self_attn.o_proj",
+ "model.language_model.layers.9.self_attn.q_proj",
+ "model.language_model.layers.5.mlp.gate_proj",
+ "model.language_model.layers.10.self_attn.v_proj",
 "model.language_model.layers.8.mlp.up_proj",
+ "model.language_model.layers.26.self_attn.v_proj",
 "model.language_model.layers.2.self_attn.q_proj",
+ "model.language_model.layers.13.self_attn.o_proj",
+ "model.language_model.layers.7.mlp.down_proj",
+ "model.language_model.layers.24.mlp.down_proj",
+ "model.language_model.layers.6.self_attn.k_proj",
+ "model.language_model.layers.0.self_attn.k_proj",
+ "model.language_model.layers.1.mlp.up_proj",
+ "model.language_model.layers.28.mlp.down_proj",
+ "model.language_model.layers.2.self_attn.k_proj",
+ "model.language_model.layers.22.mlp.up_proj",
 "model.language_model.layers.7.self_attn.q_proj",
+ "model.language_model.layers.22.self_attn.q_proj",
 "model.language_model.layers.27.mlp.down_proj",
 "model.language_model.layers.2.mlp.down_proj",
+ "model.language_model.layers.19.mlp.down_proj",
 "model.language_model.layers.6.mlp.gate_proj",
 "model.language_model.layers.9.self_attn.o_proj",
+ "model.language_model.layers.15.mlp.down_proj",
+ "model.language_model.layers.4.self_attn.o_proj",
+ "model.language_model.layers.29.self_attn.k_proj",
+ "model.language_model.layers.18.self_attn.q_proj",
+ "model.language_model.layers.11.mlp.down_proj",
+ "model.language_model.layers.26.mlp.gate_proj",
+ "model.language_model.layers.23.mlp.up_proj",
+ "model.language_model.layers.0.mlp.down_proj",
 "model.language_model.layers.8.self_attn.v_proj",
+ "model.language_model.layers.14.self_attn.k_proj",
+ "model.language_model.layers.21.mlp.up_proj",
+ "model.language_model.layers.10.self_attn.o_proj",
+ "model.language_model.layers.24.mlp.gate_proj",
+ "model.language_model.layers.28.mlp.up_proj",
+ "model.language_model.layers.29.mlp.down_proj",
 "model.language_model.layers.3.mlp.gate_proj",
+ "model.language_model.layers.8.mlp.down_proj",
+ "model.language_model.layers.9.mlp.down_proj",
+ "model.language_model.layers.18.mlp.down_proj",
+ "model.language_model.layers.19.mlp.gate_proj",
+ "model.language_model.layers.26.mlp.down_proj",
+ "model.language_model.layers.9.self_attn.v_proj",
 "model.language_model.layers.9.mlp.up_proj",
+ "model.language_model.layers.10.self_attn.q_proj",
+ "model.language_model.layers.11.self_attn.q_proj",
+ "model.language_model.layers.18.mlp.gate_proj",
+ "model.language_model.layers.16.self_attn.v_proj",
+ "model.language_model.layers.1.self_attn.k_proj",
+ "model.language_model.layers.25.mlp.up_proj",
+ "model.language_model.layers.28.self_attn.v_proj",
+ "model.language_model.layers.15.mlp.gate_proj",
+ "model.language_model.layers.9.self_attn.k_proj",
+ "model.language_model.layers.27.mlp.gate_proj",
+ "model.language_model.layers.14.self_attn.o_proj",
+ "model.language_model.layers.22.mlp.gate_proj",
+ "model.language_model.layers.14.mlp.down_proj",
 "model.language_model.layers.8.self_attn.k_proj",
+ "model.language_model.layers.12.self_attn.o_proj",
 "model.language_model.layers.4.self_attn.v_proj",
+ "model.language_model.layers.10.mlp.down_proj",
+ "model.language_model.layers.24.mlp.up_proj",
+ "model.language_model.layers.25.mlp.gate_proj",
 "model.language_model.layers.2.self_attn.v_proj",
+ "model.language_model.layers.4.self_attn.k_proj",
+ "model.language_model.layers.8.self_attn.q_proj",
+ "model.language_model.layers.18.self_attn.v_proj",
+ "model.language_model.layers.27.self_attn.o_proj",
+ "model.language_model.layers.16.self_attn.q_proj",
+ "model.language_model.layers.3.mlp.up_proj",
 "model.language_model.layers.13.mlp.gate_proj",
+ "model.language_model.layers.17.mlp.down_proj",
+ "model.language_model.layers.28.self_attn.o_proj",
 "model.language_model.layers.20.self_attn.q_proj",
+ "model.language_model.layers.0.mlp.up_proj",
+ "model.language_model.layers.16.mlp.down_proj",
 "model.language_model.layers.29.self_attn.o_proj",
 "model.language_model.layers.11.self_attn.k_proj",
+ "model.language_model.layers.20.self_attn.v_proj",
+ "model.language_model.layers.14.self_attn.v_proj",
+ "model.language_model.layers.11.mlp.gate_proj",
+ "model.language_model.layers.21.mlp.down_proj",
+ "model.language_model.layers.12.mlp.up_proj",
+ "model.language_model.layers.10.mlp.gate_proj",
+ "model.language_model.layers.10.self_attn.k_proj",
+ "model.language_model.layers.27.self_attn.q_proj",
+ "model.language_model.layers.8.mlp.gate_proj",
+ "model.language_model.layers.19.self_attn.q_proj",
+ "model.language_model.layers.23.self_attn.k_proj",
+ "model.language_model.layers.13.self_attn.q_proj",
152
+ "model.language_model.layers.0.self_attn.v_proj",
153
+ "model.language_model.layers.8.self_attn.o_proj",
154
+ "model.language_model.layers.0.mlp.gate_proj",
155
+ "model.language_model.layers.17.mlp.gate_proj",
156
+ "model.language_model.layers.1.self_attn.o_proj",
157
+ "model.language_model.layers.14.self_attn.q_proj",
158
+ "model.language_model.layers.14.mlp.gate_proj",
159
+ "model.language_model.layers.12.mlp.down_proj",
160
+ "model.language_model.layers.21.self_attn.o_proj",
161
+ "model.language_model.layers.5.mlp.up_proj",
162
+ "model.language_model.layers.20.mlp.up_proj",
163
+ "model.language_model.layers.13.mlp.up_proj",
164
+ "model.language_model.layers.18.self_attn.k_proj",
165
+ "model.language_model.layers.23.mlp.gate_proj",
166
+ "model.language_model.layers.4.mlp.down_proj",
167
+ "model.language_model.layers.24.self_attn.o_proj",
168
+ "model.language_model.layers.28.self_attn.k_proj",
169
+ "model.language_model.layers.13.self_attn.v_proj",
170
+ "model.language_model.layers.6.mlp.down_proj",
171
+ "model.language_model.layers.13.mlp.down_proj",
172
+ "model.language_model.layers.21.self_attn.q_proj",
173
+ "model.language_model.layers.10.mlp.up_proj",
174
+ "model.language_model.layers.15.self_attn.v_proj",
175
+ "model.language_model.layers.0.self_attn.o_proj",
176
+ "model.language_model.layers.9.mlp.gate_proj",
177
+ "model.language_model.layers.16.mlp.up_proj",
178
+ "model.language_model.layers.11.self_attn.o_proj",
179
+ "model.language_model.layers.17.self_attn.o_proj",
180
+ "model.language_model.layers.20.mlp.gate_proj",
181
+ "model.language_model.layers.26.mlp.up_proj",
182
+ "model.language_model.layers.15.mlp.up_proj",
183
+ "model.language_model.layers.12.mlp.gate_proj",
184
+ "model.language_model.layers.22.self_attn.o_proj",
185
+ "model.language_model.layers.28.mlp.gate_proj",
186
+ "model.language_model.layers.21.mlp.gate_proj",
187
+ "model.language_model.layers.2.mlp.up_proj",
188
+ "model.language_model.layers.28.self_attn.q_proj",
189
+ "model.language_model.layers.29.self_attn.q_proj",
190
  "model.language_model.layers.7.mlp.up_proj",
191
+ "model.language_model.layers.15.self_attn.q_proj",
192
+ "model.language_model.layers.19.self_attn.k_proj",
193
+ "model.language_model.layers.7.self_attn.v_proj",
194
+ "model.language_model.layers.29.mlp.gate_proj",
195
+ "model.language_model.layers.24.self_attn.k_proj",
196
+ "model.language_model.layers.16.mlp.gate_proj",
197
+ "model.language_model.layers.12.self_attn.k_proj",
198
+ "model.language_model.layers.4.mlp.up_proj",
199
+ "model.language_model.layers.20.mlp.down_proj",
200
+ "model.language_model.layers.5.mlp.down_proj",
201
  "model.language_model.layers.22.mlp.down_proj",
202
+ "model.language_model.layers.3.self_attn.q_proj",
203
+ "model.language_model.layers.26.self_attn.k_proj",
204
  "model.language_model.layers.20.self_attn.o_proj",
205
+ "model.language_model.layers.24.self_attn.v_proj",
206
+ "model.language_model.layers.21.self_attn.v_proj",
207
+ "model.language_model.layers.19.self_attn.o_proj",
208
+ "model.language_model.layers.29.mlp.up_proj",
209
+ "model.language_model.layers.13.self_attn.k_proj",
210
+ "model.language_model.layers.2.self_attn.o_proj",
211
+ "model.language_model.layers.16.self_attn.k_proj",
212
+ "model.language_model.layers.22.self_attn.v_proj",
213
+ "model.language_model.layers.25.self_attn.v_proj",
214
+ "model.language_model.layers.25.mlp.down_proj",
215
+ "model.language_model.layers.4.mlp.gate_proj",
216
+ "model.language_model.layers.6.self_attn.o_proj",
217
+ "model.language_model.layers.25.self_attn.q_proj",
218
+ "model.language_model.layers.7.self_attn.k_proj",
219
+ "model.language_model.layers.11.mlp.up_proj",
220
+ "model.language_model.layers.20.self_attn.k_proj",
221
+ "model.language_model.layers.6.mlp.up_proj",
222
+ "model.language_model.layers.15.self_attn.k_proj",
223
+ "model.language_model.layers.19.mlp.up_proj",
224
+ "model.language_model.layers.12.self_attn.q_proj",
225
+ "model.language_model.layers.4.self_attn.q_proj",
226
+ "model.language_model.layers.18.self_attn.o_proj",
227
+ "model.language_model.layers.1.self_attn.v_proj",
228
+ "model.language_model.layers.15.self_attn.o_proj",
229
+ "model.language_model.layers.19.self_attn.v_proj",
230
+ "model.language_model.layers.6.self_attn.v_proj",
231
+ "model.language_model.layers.12.self_attn.v_proj",
232
+ "model.language_model.layers.3.self_attn.k_proj",
233
+ "model.language_model.layers.26.self_attn.q_proj",
234
+ "model.language_model.layers.1.self_attn.q_proj",
235
+ "model.language_model.layers.27.self_attn.v_proj",
236
+ "model.language_model.layers.3.self_attn.v_proj",
237
+ "model.language_model.layers.23.self_attn.q_proj"
238
  ],
239
  "target_parameters": null,
240
  "task_type": "CAUSAL_LM",
checkpoint-363/adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5fc9c87e6f1fd7f94d16281ee425e6587460ef9e8104aac93d25e0de6b7b31a9
+ oid sha256:4527c85eaf804e53a289f91401b432b19a8b6349499a84dafcdb818b609b01a5
  size 37232104
checkpoint-363/optimizer.pt CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:704bd56fcfb881393baa83113d34a7cf3d7745744dc252c023ac9d7dee1ed1e8
- size 38230093
+ oid sha256:b804b2955b030b1c2abd00459acb0ce56cea15fc6e13966e3a73a2e51f70590e
+ size 38238223
checkpoint-363/rng_state.pth CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b277407ef7cf34b19b7f76b4063ef02c942ff37191ea6603533d3e5bb877696d
+ oid sha256:ae282780d1020b3190a6ba66893846c3b873243e07557d4974d44616c175df20
  size 14645
checkpoint-363/scheduler.pt CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a27b1dfda3b0692906cbec10c46258cb4f99d28ec188b1d9b066e584a6708792
+ oid sha256:b4e1367a2173a2cbb4dcde69fb00dd0c19fe22659c816858cfd69bdabf057cea
  size 1465
checkpoint-363/tokenizer.json CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a2619fe11b50dbed06ac443c51d757b354d0b62d64baa514404d4e84e6713519
- size 32169780
+ oid sha256:cc8d3a0ce36466ccc1278bf987df5f71db1719b9ca6b4118264f45cb627bfe0f
+ size 32169626
checkpoint-363/tokenizer_config.json CHANGED
@@ -41,7 +41,7 @@
  "think_token": "<|think|>"
  },
  "pad_token": "<pad>",
- "padding_side": "left",
+ "padding_side": "right",
  "processor_class": "Gemma4Processor",
  "response_schema": {
  "properties": {
checkpoint-363/trainer_state.json CHANGED
@@ -1,7 +1,7 @@
  {
  "best_global_step": 363,
- "best_metric": 0.6060348153114319,
- "best_model_checkpoint": "/home/plucky/ml-workspace/models/gemma4-26b-securecode/checkpoint-363",
  "epoch": 3.0,
  "eval_steps": 500,
  "global_step": 363,
@@ -10,396 +10,396 @@
  "is_world_process_zero": true,
  "log_history": [
  {
- "entropy": 1.1113492242991925,
  "epoch": 0.0827300930713547,
- "grad_norm": 10.3125,
  "learning_rate": 1.8e-05,
- "loss": 93.48836059570313,
- "mean_token_accuracy": 0.4107020549476147,
- "num_tokens": 81920.0,
  "step": 10
  },
  {
- "entropy": 0.8875315530225635,
  "epoch": 0.1654601861427094,
- "grad_norm": 6.15625,
  "learning_rate": 3.8e-05,
- "loss": 67.76697998046875,
- "mean_token_accuracy": 0.5182974558323622,
- "num_tokens": 163840.0,
  "step": 20
  },
  {
- "entropy": 0.673606987670064,
  "epoch": 0.2481902792140641,
- "grad_norm": 2.421875,
  "learning_rate": 5.8e-05,
- "loss": 37.221334838867186,
- "mean_token_accuracy": 0.6476027386263012,
- "num_tokens": 245760.0,
  "step": 30
  },
  {
- "entropy": 1.0845661748200655,
  "epoch": 0.3309203722854188,
- "grad_norm": 1.3671875,
  "learning_rate": 7.800000000000001e-05,
- "loss": 22.017848205566406,
- "mean_token_accuracy": 0.7083170266821981,
- "num_tokens": 327680.0,
  "step": 40
  },
  {
- "entropy": 1.1636322166770696,
  "epoch": 0.4136504653567735,
- "grad_norm": 0.703125,
  "learning_rate": 9.8e-05,
- "loss": 17.47879638671875,
- "mean_token_accuracy": 0.7332558700814843,
- "num_tokens": 409600.0,
  "step": 50
  },
  {
- "entropy": 0.9551631901413202,
  "epoch": 0.4963805584281282,
- "grad_norm": 0.40625,
  "learning_rate": 0.000118,
- "loss": 15.09481201171875,
- "mean_token_accuracy": 0.7555772982537746,
- "num_tokens": 491520.0,
  "step": 60
  },
  {
- "entropy": 0.8048430571332574,
  "epoch": 0.5791106514994829,
- "grad_norm": 0.375,
  "learning_rate": 0.000138,
- "loss": 13.297686767578124,
- "mean_token_accuracy": 0.7774828754365444,
- "num_tokens": 573440.0,
  "step": 70
  },
  {
- "entropy": 0.8100443260744215,
  "epoch": 0.6618407445708376,
- "grad_norm": 0.4609375,
  "learning_rate": 0.00015800000000000002,
- "loss": 12.752572631835937,
- "mean_token_accuracy": 0.7837084107100963,
- "num_tokens": 655360.0,
  "step": 80
  },
  {
- "entropy": 0.7172152267768979,
  "epoch": 0.7445708376421923,
- "grad_norm": 2.1875,
  "learning_rate": 0.00017800000000000002,
- "loss": 11.629959106445312,
- "mean_token_accuracy": 0.799449609220028,
- "num_tokens": 737280.0,
  "step": 90
  },
  {
- "entropy": 0.7284062243998051,
  "epoch": 0.827300930713547,
- "grad_norm": 0.40625,
  "learning_rate": 0.00019800000000000002,
- "loss": 11.506278991699219,
- "mean_token_accuracy": 0.8022871781140566,
- "num_tokens": 819200.0,
  "step": 100
  },
  {
- "entropy": 0.6922262106090784,
  "epoch": 0.9100310237849017,
- "grad_norm": 0.341796875,
  "learning_rate": 0.00019942266891397815,
- "loss": 11.149666595458985,
- "mean_token_accuracy": 0.8068982377648354,
- "num_tokens": 901120.0,
  "step": 110
  },
  {
- "entropy": 0.6608987387269736,
  "epoch": 0.9927611168562565,
- "grad_norm": 0.373046875,
  "learning_rate": 0.00019743551343638324,
- "loss": 10.666960906982421,
- "mean_token_accuracy": 0.8124388422816992,
- "num_tokens": 983040.0,
  "step": 120
  },
  {
  "epoch": 1.0,
- "eval_entropy": 0.6862195637336997,
- "eval_loss": 0.6695265769958496,
- "eval_mean_token_accuracy": 0.8135074851124786,
- "eval_num_tokens": 990208.0,
- "eval_runtime": 255.0413,
- "eval_samples_per_second": 0.843,
- "eval_steps_per_second": 0.843,
  "step": 121
  },
  {
- "entropy": 0.6788679953617386,
  "epoch": 1.0744570837642193,
- "grad_norm": 0.3984375,
  "learning_rate": 0.00019405971991583108,
- "loss": 10.533837127685548,
- "mean_token_accuracy": 0.8129133717923225,
- "num_tokens": 1063936.0,
  "step": 130
  },
  {
- "entropy": 0.5800832805223763,
  "epoch": 1.157187176835574,
- "grad_norm": 0.333984375,
  "learning_rate": 0.00018934339971482674,
- "loss": 9.498150634765626,
- "mean_token_accuracy": 0.8281555753201246,
- "num_tokens": 1145856.0,
  "step": 140
  },
  {
- "entropy": 0.6344770405441522,
  "epoch": 1.2399172699069285,
- "grad_norm": 0.388671875,
  "learning_rate": 0.00018335376920472097,
- "loss": 10.217367553710938,
- "mean_token_accuracy": 0.8195327781140804,
- "num_tokens": 1227776.0,
  "step": 150
  },
  {
- "entropy": 0.6310219537466765,
  "epoch": 1.3226473629782833,
- "grad_norm": 0.380859375,
  "learning_rate": 0.00017617619180688085,
- "loss": 10.081737518310547,
- "mean_token_accuracy": 0.8219178050756455,
- "num_tokens": 1309696.0,
  "step": 160
  },
  {
- "entropy": 0.5863334746100008,
  "epoch": 1.4053774560496382,
- "grad_norm": 0.341796875,
  "learning_rate": 0.00016791296140450545,
- "loss": 9.392319488525391,
- "mean_token_accuracy": 0.8319227002561093,
- "num_tokens": 1391616.0,
  "step": 170
  },
  {
- "entropy": 0.6232900662347675,
  "epoch": 1.4881075491209927,
- "grad_norm": 0.44921875,
  "learning_rate": 0.0001586818444637402,
- "loss": 10.051438140869141,
- "mean_token_accuracy": 0.8215264175087214,
- "num_tokens": 1473536.0,
  "step": 180
  },
  {
- "entropy": 0.6163463215343654,
  "epoch": 1.5708376421923473,
- "grad_norm": 0.384765625,
  "learning_rate": 0.0001486144016415862,
- "loss": 9.878226470947265,
- "mean_token_accuracy": 0.8220768082886935,
- "num_tokens": 1555456.0,
  "step": 190
  },
  {
- "entropy": 0.588023800123483,
  "epoch": 1.6535677352637022,
- "grad_norm": 0.3515625,
  "learning_rate": 0.00013785411280082746,
- "loss": 9.45407943725586,
- "mean_token_accuracy": 0.8305283710360527,
- "num_tokens": 1637376.0,
  "step": 200
  },
  {
- "entropy": 0.599842881783843,
  "epoch": 1.736297828335057,
- "grad_norm": 0.37890625,
  "learning_rate": 0.00012655433215401438,
- "loss": 9.548422241210938,
- "mean_token_accuracy": 0.8284735765308142,
- "num_tokens": 1719296.0,
  "step": 210
  },
  {
- "entropy": 0.6552030782215297,
  "epoch": 1.8190279214064116,
- "grad_norm": 0.361328125,
  "learning_rate": 0.00011487610267952142,
- "loss": 10.46890640258789,
- "mean_token_accuracy": 0.8134295467287302,
- "num_tokens": 1801216.0,
  "step": 220
  },
  {
- "entropy": 0.5984975789207965,
  "epoch": 1.9017580144777662,
- "grad_norm": 0.353515625,
  "learning_rate": 0.00010298586095833151,
- "loss": 9.603475952148438,
- "mean_token_accuracy": 0.827079250663519,
- "num_tokens": 1883136.0,
  "step": 230
  },
  {
- "entropy": 0.5947112645488233,
  "epoch": 1.984488107549121,
- "grad_norm": 0.64453125,
  "learning_rate": 9.10530651419099e-05,
- "loss": 9.561953735351562,
- "mean_token_accuracy": 0.8265655554831028,
- "num_tokens": 1965056.0,
  "step": 240
  },
  {
  "epoch": 2.0,
- "eval_entropy": 0.6100467269503793,
- "eval_loss": 0.6102388501167297,
- "eval_mean_token_accuracy": 0.8254676164582718,
- "eval_num_tokens": 1980416.0,
- "eval_runtime": 254.828,
- "eval_samples_per_second": 0.844,
- "eval_steps_per_second": 0.844,
  "step": 242
  },
  {
- "entropy": 0.5080371947511088,
  "epoch": 2.066184074457084,
- "grad_norm": 0.453125,
  "learning_rate": 7.924777985705556e-05,
- "loss": 8.056553649902344,
- "mean_token_accuracy": 0.8497857213774814,
- "num_tokens": 2045952.0,
  "step": 250
  },
  {
- "entropy": 0.5341692148707807,
  "epoch": 2.1489141675284387,
- "grad_norm": 0.384765625,
  "learning_rate": 6.773825246734622e-05,
- "loss": 8.356841278076171,
- "mean_token_accuracy": 0.8431262206286192,
- "num_tokens": 2127872.0,
  "step": 260
  },
  {
- "entropy": 0.5629857819527387,
  "epoch": 2.231644260599793,
- "grad_norm": 0.328125,
  "learning_rate": 5.668851523397829e-05,
- "loss": 9.067486572265626,
- "mean_token_accuracy": 0.8315435405820608,
- "num_tokens": 2209792.0,
  "step": 270
  },
  {
- "entropy": 0.5280973493587225,
  "epoch": 2.314374353671148,
- "grad_norm": 0.361328125,
  "learning_rate": 4.625604754968839e-05,
- "loss": 8.390058135986328,
- "mean_token_accuracy": 0.8423923663794994,
- "num_tokens": 2291712.0,
  "step": 280
  },
  {
- "entropy": 0.5421305931173265,
  "epoch": 2.3971044467425027,
- "grad_norm": 0.353515625,
  "learning_rate": 3.658953156328857e-05,
- "loss": 8.713886260986328,
- "mean_token_accuracy": 0.8375489212572574,
- "num_tokens": 2373632.0,
  "step": 290
  },
  {
- "entropy": 0.5257686520460993,
  "epoch": 2.479834539813857,
- "grad_norm": 0.373046875,
  "learning_rate": 2.7826733181357932e-05,
- "loss": 8.388682556152343,
- "mean_token_accuracy": 0.8447284691035748,
- "num_tokens": 2455552.0,
  "step": 300
  },
  {
- "entropy": 0.5735760541632772,
  "epoch": 2.562564632885212,
- "grad_norm": 0.421875,
  "learning_rate": 2.0092538646774072e-05,
- "loss": 9.259294891357422,
- "mean_token_accuracy": 0.8287671197205781,
- "num_tokens": 2537472.0,
  "step": 310
  },
  {
- "entropy": 0.5352369678206742,
  "epoch": 2.6452947259565667,
- "grad_norm": 0.369140625,
  "learning_rate": 1.3497174676506674e-05,
- "loss": 8.547685241699218,
- "mean_token_accuracy": 0.8413160435855389,
- "num_tokens": 2619392.0,
  "step": 320
  },
  {
- "entropy": 0.540962244477123,
  "epoch": 2.7280248190279215,
- "grad_norm": 0.365234375,
  "learning_rate": 8.134637525034839e-06,
- "loss": 8.591437530517577,
- "mean_token_accuracy": 0.838882090896368,
- "num_tokens": 2701312.0,
  "step": 330
  },
  {
- "entropy": 0.5567054254934192,
  "epoch": 2.8107549120992763,
- "grad_norm": 0.353515625,
  "learning_rate": 4.081353362167406e-06,
- "loss": 8.788534545898438,
- "mean_token_accuracy": 0.8374510746449232,
- "num_tokens": 2783232.0,
  "step": 340
  },
  {
- "entropy": 0.5575114467181266,
  "epoch": 2.8934850051706307,
- "grad_norm": 0.35546875,
  "learning_rate": 1.3950890573852126e-06,
- "loss": 8.935771179199218,
- "mean_token_accuracy": 0.8345768079161644,
- "num_tokens": 2865152.0,
  "step": 350
  },
  {
- "entropy": 0.5235911178402602,
  "epoch": 2.9762150982419855,
- "grad_norm": 0.36328125,
  "learning_rate": 1.1412889406192673e-07,
- "loss": 8.273484039306641,
- "mean_token_accuracy": 0.8450831711292267,
- "num_tokens": 2947072.0,
  "step": 360
  },
  {
  "epoch": 3.0,
- "eval_entropy": 0.5448623623265777,
- "eval_loss": 0.6060348153114319,
- "eval_mean_token_accuracy": 0.8278159535208414,
- "eval_num_tokens": 2970624.0,
- "eval_runtime": 254.9017,
- "eval_samples_per_second": 0.843,
- "eval_steps_per_second": 0.843,
  "step": 363
  }
  ],
@@ -420,7 +420,7 @@
  "attributes": {}
  }
  },
- "total_flos": 4.471321806592082e+17,
  "train_batch_size": 1,
  "trial_name": null,
  "trial_params": null
 
  {
  "best_global_step": 363,
+ "best_metric": 0.43587613105773926,
+ "best_model_checkpoint": "/workspace/gemma4-26b-securecode/checkpoint-363",
  "epoch": 3.0,
  "eval_steps": 500,
  "global_step": 363,
  "is_world_process_zero": true,
  "log_history": [
  {
+ "entropy": 1.0907821020111441,
  "epoch": 0.0827300930713547,
+ "grad_norm": 20.875,
  "learning_rate": 1.8e-05,
+ "loss": 80.26775512695312,
+ "mean_token_accuracy": 0.4542873948812485,
+ "num_tokens": 326185.0,
  "step": 10
  },
  {
+ "entropy": 0.8271314173936843,
  "epoch": 0.1654601861427094,
+ "grad_norm": 8.75,
  "learning_rate": 3.8e-05,
+ "loss": 58.08096923828125,
+ "mean_token_accuracy": 0.5611657274886965,
+ "num_tokens": 653865.0,
  "step": 20
  },
  {
+ "entropy": 0.4787554959766567,
  "epoch": 0.2481902792140641,
+ "grad_norm": 1.7109375,
  "learning_rate": 5.8e-05,
+ "loss": 25.493240356445312,
+ "mean_token_accuracy": 0.7378443486988544,
+ "num_tokens": 981337.0,
  "step": 30
  },
  {
+ "entropy": 0.7855595085769892,
  "epoch": 0.3309203722854188,
+ "grad_norm": 0.8671875,
  "learning_rate": 7.800000000000001e-05,
+ "loss": 14.629072570800782,
+ "mean_token_accuracy": 0.7917733617126942,
+ "num_tokens": 1308584.0,
  "step": 40
  },
  {
+ "entropy": 0.7569877350702882,
  "epoch": 0.4136504653567735,
+ "grad_norm": 2.109375,
  "learning_rate": 9.8e-05,
+ "loss": 12.609142303466797,
+ "mean_token_accuracy": 0.8013272784650326,
+ "num_tokens": 1635098.0,
  "step": 50
  },
  {
+ "entropy": 0.6735223602503538,
  "epoch": 0.4963805584281282,
+ "grad_norm": 16.875,
  "learning_rate": 0.000118,
+ "loss": 10.704925537109375,
+ "mean_token_accuracy": 0.8209844313561916,
+ "num_tokens": 1962302.0,
  "step": 60
  },
  {
+ "entropy": 0.6005677949637175,
  "epoch": 0.5791106514994829,
+ "grad_norm": 1.546875,
  "learning_rate": 0.000138,
+ "loss": 9.783185577392578,
+ "mean_token_accuracy": 0.8308866504579783,
+ "num_tokens": 2289982.0,
  "step": 70
  },
  {
+ "entropy": 0.5877057909965515,
  "epoch": 0.6618407445708376,
+ "grad_norm": 11.25,
  "learning_rate": 0.00015800000000000002,
+ "loss": 9.298844909667968,
+ "mean_token_accuracy": 0.8359990835189819,
+ "num_tokens": 2616786.0,
  "step": 80
  },
  {
+ "entropy": 0.5447238819673658,
  "epoch": 0.7445708376421923,
+ "grad_norm": 1.2890625,
  "learning_rate": 0.00017800000000000002,
+ "loss": 8.777264404296876,
+ "mean_token_accuracy": 0.8440194871276617,
+ "num_tokens": 2941975.0,
  "step": 90
  },
  {
+ "entropy": 0.5323287105187774,
  "epoch": 0.827300930713547,
+ "grad_norm": 0.70703125,
  "learning_rate": 0.00019800000000000002,
+ "loss": 8.489185333251953,
+ "mean_token_accuracy": 0.8486687760800123,
+ "num_tokens": 3269655.0,
  "step": 100
  },
  {
+ "entropy": 0.4949887519702315,
  "epoch": 0.9100310237849017,
+ "grad_norm": 0.439453125,
  "learning_rate": 0.00019942266891397815,
+ "loss": 8.192723083496094,
+ "mean_token_accuracy": 0.8528529018163681,
+ "num_tokens": 3595193.0,
  "step": 110
  },
  {
+ "entropy": 0.4980895221233368,
  "epoch": 0.9927611168562565,
+ "grad_norm": 0.921875,
  "learning_rate": 0.00019743551343638324,
+ "loss": 7.908926391601563,
+ "mean_token_accuracy": 0.8567473825067282,
+ "num_tokens": 3922475.0,
  "step": 120
  },
  {
  "epoch": 1.0,
+ "eval_entropy": 0.5629051625728607,
+ "eval_loss": 0.4985087513923645,
+ "eval_mean_token_accuracy": 0.8571729124978531,
+ "eval_num_tokens": 3949618.0,
+ "eval_runtime": 122.3216,
+ "eval_samples_per_second": 1.758,
+ "eval_steps_per_second": 1.758,
  "step": 121
  },
  {
+ "entropy": 0.5185692432937743,
  "epoch": 1.0744570837642193,
+ "grad_norm": 0.37890625,
  "learning_rate": 0.00019405971991583108,
+ "loss": 7.807546997070313,
+ "mean_token_accuracy": 0.8577670869947989,
+ "num_tokens": 4244530.0,
  "step": 130
  },
  {
+ "entropy": 0.4555334035307169,
  "epoch": 1.157187176835574,
+ "grad_norm": 0.1953125,
  "learning_rate": 0.00018934339971482674,
+ "loss": 7.464869689941406,
+ "mean_token_accuracy": 0.8638800706714391,
+ "num_tokens": 4572210.0,
  "step": 140
  },
  {
+ "entropy": 0.47754106651991607,
  "epoch": 1.2399172699069285,
+ "grad_norm": 1.21875,
  "learning_rate": 0.00018335376920472097,
+ "loss": 7.764054870605468,
+ "mean_token_accuracy": 0.8579531148076057,
+ "num_tokens": 4897694.0,
  "step": 150
  },
  {
+ "entropy": 0.4550897226668894,
  "epoch": 1.3226473629782833,
+ "grad_norm": 0.28515625,
  "learning_rate": 0.00017617619180688085,
+ "loss": 7.322466278076172,
+ "mean_token_accuracy": 0.8654993120580912,
+ "num_tokens": 5223716.0,
  "step": 160
  },
  {
+ "entropy": 0.4635292864404619,
  "epoch": 1.4053774560496382,
+ "grad_norm": 0.8203125,
  "learning_rate": 0.00016791296140450545,
+ "loss": 7.322091674804687,
+ "mean_token_accuracy": 0.8659177150577306,
+ "num_tokens": 5550963.0,
  "step": 170
  },
  {
+ "entropy": 0.46733071468770504,
  "epoch": 1.4881075491209927,
+ "grad_norm": 0.45703125,
  "learning_rate": 0.0001586818444637402,
+ "loss": 7.3169700622558596,
+ "mean_token_accuracy": 0.8659562785178423,
+ "num_tokens": 5878643.0,
  "step": 180
  },
  {
+ "entropy": 0.4562882795929909,
  "epoch": 1.5708376421923473,
+ "grad_norm": 0.302734375,
  "learning_rate": 0.0001486144016415862,
+ "loss": 7.291434478759766,
+ "mean_token_accuracy": 0.8656417932361364,
+ "num_tokens": 6206323.0,
  "step": 190
  },
  {
+ "entropy": 0.4341404400765896,
  "epoch": 1.6535677352637022,
+ "grad_norm": 0.30859375,
  "learning_rate": 0.00013785411280082746,
+ "loss": 6.904853057861328,
+ "mean_token_accuracy": 0.8713251128792763,
+ "num_tokens": 6533527.0,
  "step": 200
  },
  {
+ "entropy": 0.4494163889437914,
  "epoch": 1.736297828335057,
+ "grad_norm": 0.2177734375,
  "learning_rate": 0.00012655433215401438,
+ "loss": 7.195234680175782,
+ "mean_token_accuracy": 0.8673424527049065,
+ "num_tokens": 6861207.0,
  "step": 210
  },
  {
+ "entropy": 0.46514057284221055,
  "epoch": 1.8190279214064116,
+ "grad_norm": 0.220703125,
  "learning_rate": 0.00011487610267952142,
+ "loss": 7.431344604492187,
+ "mean_token_accuracy": 0.8633232209831476,
+ "num_tokens": 7188281.0,
  "step": 220
  },
  {
+ "entropy": 0.43800092255696654,
  "epoch": 1.9017580144777662,
+ "grad_norm": 0.1962890625,
  "learning_rate": 0.00010298586095833151,
+ "loss": 7.023017883300781,
+ "mean_token_accuracy": 0.8693898901343345,
+ "num_tokens": 7513788.0,
  "step": 230
  },
  {
+ "entropy": 0.44280060222372414,
  "epoch": 1.984488107549121,
+ "grad_norm": 0.453125,
  "learning_rate": 9.10530651419099e-05,
+ "loss": 7.070135498046875,
+ "mean_token_accuracy": 0.8684426795691251,
+ "num_tokens": 7839097.0,
  "step": 240
  },
  {
  "epoch": 2.0,
+ "eval_entropy": 0.43673129948072653,
+ "eval_loss": 0.44326454401016235,
+ "eval_mean_token_accuracy": 0.8689799866010977,
+ "eval_num_tokens": 7899236.0,
+ "eval_runtime": 122.7611,
+ "eval_samples_per_second": 1.751,
+ "eval_steps_per_second": 1.751,
  "step": 242
  },
  {
+ "entropy": 0.39076420287542707,
  "epoch": 2.066184074457084,
+ "grad_norm": 0.2421875,
  "learning_rate": 7.924777985705556e-05,
+ "loss": 6.180745697021484,
+ "mean_token_accuracy": 0.8815464007703564,
+ "num_tokens": 8159427.0,
  "step": 250
  },
  {
+ "entropy": 0.4168915188405663,
  "epoch": 2.1489141675284387,
+ "grad_norm": 0.1962890625,
  "learning_rate": 6.773825246734622e-05,
+ "loss": 6.647556304931641,
+ "mean_token_accuracy": 0.8745267510414123,
+ "num_tokens": 8487107.0,
  "step": 260
  },
  {
+ "entropy": 0.42043258836492897,
  "epoch": 2.231644260599793,
+ "grad_norm": 0.2041015625,
  "learning_rate": 5.668851523397829e-05,
+ "loss": 6.715694427490234,
+ "mean_token_accuracy": 0.8731884736567735,
+ "num_tokens": 8814579.0,
  "step": 270
  },
  {
+ "entropy": 0.4163642665371299,
  "epoch": 2.314374353671148,
+ "grad_norm": 0.1884765625,
  "learning_rate": 4.625604754968839e-05,
+ "loss": 6.608394622802734,
+ "mean_token_accuracy": 0.8757493741810322,
+ "num_tokens": 9140560.0,
  "step": 280
  },
  {
+ "entropy": 0.44139298899099233,
  "epoch": 2.3971044467425027,
+ "grad_norm": 0.3515625,
  "learning_rate": 3.658953156328857e-05,
+ "loss": 7.034827423095703,
+ "mean_token_accuracy": 0.8671251021325588,
+ "num_tokens": 9466711.0,
  "step": 290
  },
  {
+ "entropy": 0.40197266899049283,
  "epoch": 2.479834539813857,
+ "grad_norm": 1.7109375,
  "learning_rate": 2.7826733181357932e-05,
+ "loss": 6.454815673828125,
+ "mean_token_accuracy": 0.8788287751376629,
+ "num_tokens": 9794391.0,
  "step": 300
  },
  {
+ "entropy": 0.4310479393228889,
  "epoch": 2.562564632885212,
+ "grad_norm": 0.388671875,
  "learning_rate": 2.0092538646774072e-05,
+ "loss": 6.927095031738281,
+ "mean_token_accuracy": 0.869931609556079,
+ "num_tokens": 10122071.0,
  "step": 310
  },
  {
+ "entropy": 0.4207622425630689,
  "epoch": 2.6452947259565667,
+ "grad_norm": 0.30078125,
  "learning_rate": 1.3497174676506674e-05,
+ "loss": 6.691123962402344,
+ "mean_token_accuracy": 0.8738963160663843,
+ "num_tokens": 10448117.0,
  "step": 320
  },
  {
+ "entropy": 0.4212987683247775,
  "epoch": 2.7280248190279215,
+ "grad_norm": 0.275390625,
  "learning_rate": 8.134637525034839e-06,
+ "loss": 6.729954528808594,
+ "mean_token_accuracy": 0.8721410397440195,
+ "num_tokens": 10774291.0,
  "step": 330
  },
  {
+ "entropy": 0.41487495419569315,
  "epoch": 2.8107549120992763,
+ "grad_norm": 0.271484375,
  "learning_rate": 4.081353362167406e-06,
+ "loss": 6.5751182556152346,
+ "mean_token_accuracy": 0.8759495642036199,
+ "num_tokens": 11101971.0,
  "step": 340
  },
  {
+ "entropy": 0.4179635870270431,
  "epoch": 2.8934850051706307,
+ "grad_norm": 0.63671875,
  "learning_rate": 1.3950890573852126e-06,
+ "loss": 6.681074523925782,
+ "mean_token_accuracy": 0.8738733492791653,
+ "num_tokens": 11429651.0,
  "step": 350
  },
  {
+ "entropy": 0.40078685265034436,
  "epoch": 2.9762150982419855,
+ "grad_norm": 0.267578125,
  "learning_rate": 1.1412889406192673e-07,
+ "loss": 6.333649444580078,
+ "mean_token_accuracy": 0.8798086743801832,
+ "num_tokens": 11754646.0,
  "step": 360
  },
  {
  "epoch": 3.0,
+ "eval_entropy": 0.41775747471770575,
+ "eval_loss": 0.43587613105773926,
+ "eval_mean_token_accuracy": 0.8707626212474912,
+ "eval_num_tokens": 11848854.0,
+ "eval_runtime": 122.5213,
+ "eval_samples_per_second": 1.755,
+ "eval_steps_per_second": 1.755,
  "step": 363
  }
  ],
 
  "attributes": {}
  }
  },
+ "total_flos": 1.7834649983749484e+18,
  "train_batch_size": 1,
  "trial_name": null,
  "trial_params": null
checkpoint-363/training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2eaf4e1eba101412810b250e27914b2df87f93b0a9c62028451f50813e692b8e
+ oid sha256:5bdf412780aad6b6bc055248dd1640e0d1a2282e1c11f28390eac7fae5fae303
  size 5713
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2eaf4e1eba101412810b250e27914b2df87f93b0a9c62028451f50813e692b8e
+ oid sha256:5bdf412780aad6b6bc055248dd1640e0d1a2282e1c11f28390eac7fae5fae303
  size 5713