Upload folder using huggingface_hub

Files changed:
- README.md +38 -157
- adapter_config.json +180 -180
- adapter_model.safetensors +1 -1
- checkpoint-121/adapter_config.json +180 -180
- checkpoint-121/adapter_model.safetensors +1 -1
- checkpoint-121/optimizer.pt +2 -2
- checkpoint-121/rng_state.pth +1 -1
- checkpoint-121/scheduler.pt +1 -1
- checkpoint-121/tokenizer.json +2 -2
- checkpoint-121/tokenizer_config.json +1 -1
- checkpoint-121/trainer_state.json +70 -70
- checkpoint-121/training_args.bin +1 -1
- checkpoint-242/adapter_config.json +180 -180
- checkpoint-242/adapter_model.safetensors +1 -1
- checkpoint-242/optimizer.pt +2 -2
- checkpoint-242/rng_state.pth +1 -1
- checkpoint-242/scheduler.pt +1 -1
- checkpoint-242/tokenizer.json +2 -2
- checkpoint-242/tokenizer_config.json +1 -1
- checkpoint-242/trainer_state.json +137 -137
- checkpoint-242/training_args.bin +1 -1
- checkpoint-363/adapter_config.json +180 -180
- checkpoint-363/adapter_model.safetensors +1 -1
- checkpoint-363/optimizer.pt +2 -2
- checkpoint-363/rng_state.pth +1 -1
- checkpoint-363/scheduler.pt +1 -1
- checkpoint-363/tokenizer.json +2 -2
- checkpoint-363/tokenizer_config.json +1 -1
- checkpoint-363/trainer_state.json +204 -204
- checkpoint-363/training_args.bin +1 -1
- training_args.bin +1 -1
README.md CHANGED

@@ -1,181 +1,62 @@ (removed lines; blank `-` lines are removed lines whose content was not recoverable)
 ---
-library_name: peft
-license: gemma
 base_model: google/gemma-4-26b-a4b-it
 tags:
-
-
-
-
-
-
-- owasp
-- ai-security
-datasets:
-- scthornton/securecode
-- scthornton/securecode-web
 pipeline_tag: text-generation
-model-index:
-- name: gemma4-26b-securecode
-  results: []
 ---
 
-#
-
-**Security-specialized code generation model** fine-tuned on the [SecureCode](https://huggingface.co/datasets/scthornton/securecode) and [SecureCode Web](https://huggingface.co/datasets/scthornton/securecode-web) datasets.
-
-Part of the [SecureCode model collection](https://huggingface.co/collections/scthornton/securecode) by [perfecXion.ai](https://perfecxion.ai).
-
-## Model Details
-
-| Property | Value |
-|----------|-------|
-| **Base Model** | [google/gemma-4-26b-a4b-it](https://huggingface.co/google/gemma-4-26b-a4b-it) |
-| **Architecture** | Gemma 4 Mixture-of-Experts (26B total, 4B active per token) |
-| **Method** | QLoRA (4-bit NormalFloat quantization) |
-| **Parameters Trained** | ~1-2% via LoRA adapters |
-| **Tier** | Tier 3: Large Security Specialist |
-
-## Training Configuration
-
-### QLoRA Settings
-
-| Parameter | Value |
-|-----------|-------|
-| Quantization | 4-bit NormalFloat (NF4) |
-| Compute Dtype | bfloat16 |
-| Double Quantization | Enabled |
-| LoRA Rank | 16 |
-| LoRA Alpha | 32 |
-| LoRA Dropout | 0.05 |
-| Target Modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
-
-### Training Hyperparameters
-
-| Parameter | Value |
-|-----------|-------|
-| Learning Rate | 2e-4 |
-| LR Scheduler | Cosine with 100-step warmup |
-| Epochs | 3 |
-| Per-device Batch Size | 2 |
-| Gradient Accumulation | 8x |
-| Effective Batch Size | 16 |
-| Max Sequence Length | 4,096 tokens |
-| Optimizer | paged_adamw_8bit |
-| Precision | bf16 |
-
-### Hardware
 
-
-
-| System | NVIDIA DGX Spark |
-| GPU | NVIDIA GB10 |
-| Memory | 128 GB Unified (CPU/GPU) |
 
-##
-
-Combined and deduplicated from two datasets:
-
-| Dataset | Examples | Focus |
-|---------|----------|-------|
-| [scthornton/securecode](https://huggingface.co/datasets/scthornton/securecode) | 2,185 | Web + AI/ML security (OWASP Top 10 2021 + LLM Top 10 2025) |
-| [scthornton/securecode-web](https://huggingface.co/datasets/scthornton/securecode-web) | 1,378 | Web security with framework-specific patterns |
-
-### Coverage
-
-**Vulnerability Standards:**
-- OWASP Top 10 2021 (Web/Application Security)
-- OWASP LLM Top 10 2025 (AI/ML Security)
-- 92+ CWEs mapped
-
-**Programming Languages:** Python, JavaScript, Java, Go, PHP, TypeScript, C#, Ruby, Rust, Kotlin, YAML, HCL
-
-**Frameworks:** 49+ including LangChain, OpenAI, Anthropic, HuggingFace, Django, Express.js, Spring Boot, FastAPI, and more
-
-**Training Format:** 4-turn conversational examples:
-1. Developer asks about implementing a feature
-2. Assistant provides vulnerable + secure implementations with attack demonstrations
-3. Developer asks about testing and edge cases
-4. Assistant delivers defense-in-depth operational guidance
-
-Every example is grounded in real CVEs and published security incidents.
-
-## Usage
 
 ```python
-from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
-from peft import PeftModel
-import torch
-
-bnb_config = BitsAndBytesConfig(
-    load_in_4bit=True,
-    bnb_4bit_quant_type="nf4",
-    bnb_4bit_compute_dtype=torch.bfloat16,
-)
-
-base_model = AutoModelForCausalLM.from_pretrained(
-    "google/gemma-4-26b-a4b-it",
-    quantization_config=bnb_config,
-    device_map="auto",
-)
-tokenizer = AutoTokenizer.from_pretrained("scthornton/gemma4-26b-securecode")
-model = PeftModel.from_pretrained(base_model, "scthornton/gemma4-26b-securecode")
-
-messages = [
-    {"role": "user", "content": "How do I implement JWT authentication with refresh tokens in Python?"}
-]
-
-inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
-outputs = model.generate(inputs, max_new_tokens=2048, temperature=0.7)
-print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 ```
 
-##
 
-
 
-- Generate **secure implementations by default** with proper input validation, parameterized queries, and cryptographic best practices
-- Provide **vulnerable AND secure** code side-by-side so developers understand the risk
-- Include **defense-in-depth guidance**: logging, monitoring, SIEM integration, and infrastructure hardening
-- Cover **AI/ML-specific vulnerabilities**: prompt injection defenses, RAG security, model supply chain protection
 
-## SecureCode Model Collection
 
-
-|-------|-----------|------|
-| [llama-3.2-3b-securecode](https://huggingface.co/scthornton/llama-3.2-3b-securecode) | 3B | Llama 3.2 3B |
-| [codegemma-7b-securecode](https://huggingface.co/scthornton/codegemma-7b-securecode) | 7B | CodeGemma 7B IT |
-| [deepseek-coder-6.7b-securecode](https://huggingface.co/scthornton/deepseek-coder-6.7b-securecode) | 6.7B | DeepSeek Coder |
-| [qwen-coder-7b-securecode](https://huggingface.co/scthornton/qwen-coder-7b-securecode) | 7B | Qwen Coder 7B |
-| [codellama-13b-securecode](https://huggingface.co/scthornton/codellama-13b-securecode) | 13B | Code Llama 13B |
-| [qwen2.5-coder-14b-securecode](https://huggingface.co/scthornton/qwen2.5-coder-14b-securecode) | 14B | Qwen 2.5 Coder 14B |
-| [starcoder2-15b-securecode](https://huggingface.co/scthornton/starcoder2-15b-securecode) | 15B | StarCoder2 15B |
-| [granite-20b-code-securecode](https://huggingface.co/scthornton/granite-20b-code-securecode) | 20B | Granite 20B Code |
-| **gemma4-26b-securecode** | **26B (4B active)** | **Gemma 4 26B-A4B IT** |
 
-##
 
--
--
--
--
 
-##
 
-- **Model:** Gemma license (inherited from base model)
-- **Dataset:** CC BY-NC-SA 4.0
-- **Adapters:** CC BY-NC-SA 4.0
 
-## Citation
 
 ```bibtex
-@
-title={
-author={
-
-
-
-note={arXiv:2512.18542}
 }
-```
(added lines; blank `+` lines are added lines whose content was not recoverable)
 ---
 base_model: google/gemma-4-26b-a4b-it
+library_name: peft
+model_name: gemma4-26b-securecode
 tags:
+- base_model:adapter:google/gemma-4-26b-a4b-it
+- lora
+- sft
+- transformers
+- trl
+licence: license
 pipeline_tag: text-generation
 ---
 
+# Model Card for gemma4-26b-securecode
 
+This model is a fine-tuned version of [google/gemma-4-26b-a4b-it](https://huggingface.co/google/gemma-4-26b-a4b-it).
+It has been trained using [TRL](https://github.com/huggingface/trl).
 
+## Quick start
 
 ```python
+from transformers import pipeline
+
+question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+generator = pipeline("text-generation", model="None", device="cuda")
+output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+print(output["generated_text"])
 ```
 
+## Training procedure
 
+
 
+This model was trained with SFT.
 
+### Framework versions
 
+- PEFT 0.18.2.dev0
+- TRL: 1.0.0
+- Transformers: 5.5.0
+- Pytorch: 2.7.1+cu128
+- Datasets: 4.8.4
+- Tokenizers: 0.22.2
 
+## Citations
 
+Cite TRL as:
+
 ```bibtex
+@software{vonwerra2020trl,
+    title = {{TRL: Transformers Reinforcement Learning}},
+    author = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallouédec, Quentin},
+    license = {Apache-2.0},
+    url = {https://github.com/huggingface/trl},
+    year = {2020}
 }
+```
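The removed card's Model Details table states that only ~1-2% of parameters are trained, via rank-16 LoRA adapters on the q/k/v/o/gate/up/down projections. A rank-r adapter on a d_in x d_out projection adds r*(d_in + d_out) weights, so that claim can be sanity-checked with simple arithmetic. The sketch below does this for hypothetical layer shapes; the hidden, KV, and intermediate dimensions are assumptions for illustration, not values taken from the card or the Gemma config.

```python
# Rough LoRA parameter-count sanity check for the "~1-2% trained" claim.
# All dimensions below are illustrative assumptions, NOT taken from the card.

r = 16  # LoRA rank, from the QLoRA settings table

def lora_params(d_in, d_out, rank=r):
    # LoRA adds A (rank x d_in) and B (d_out x rank) beside a frozen weight.
    return rank * d_in + d_out * rank

# Assumed per-layer projection shapes (hypothetical):
hidden, kv, inter = 2048, 512, 8192

per_layer = (
    2 * lora_params(hidden, hidden)   # q_proj, o_proj (assumed square)
    + 2 * lora_params(hidden, kv)     # k_proj, v_proj
    + 2 * lora_params(hidden, inter)  # gate_proj, up_proj
    + lora_params(inter, hidden)      # down_proj
)
layers = 30  # the adapter config targets layers 0-29
total = layers * per_layer
print(f"{total:,} trainable LoRA parameters")  # 21,135,360 with these shapes
```

With these assumed shapes the adapters come to roughly 21M weights, a fraction of a percent of the ~4B active parameters, which is the same order of magnitude as the card's "~1-2%" figure (the card presumably counts against a different parameter base).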
adapter_config.json CHANGED

@@ -24,217 +24,217 @@ (removed lines; truncated strings were cut off in the original and are left as-is)
   "megatron_core": "megatron.core",
   "modules_to_save": null,
   "peft_type": "LORA",
-  "peft_version": "0.18.2.dev0@
   "qalora_group_size": 16,
   "r": 16,
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.3.mlp.
     "model.language_model.layers.17.mlp.up_proj",
-    "model.language_model.layers.
     "model.language_model.layers.27.self_attn.k_proj",
-    "model.language_model.layers.28.mlp.down_proj",
-    "model.language_model.layers.6.mlp.up_proj",
-    "model.language_model.layers.24.self_attn.k_proj",
-    "model.language_model.layers.6.self_attn.q_proj",
-    "model.language_model.layers.17.self_attn.q_proj",
-    "model.language_model.layers.15.self_attn.k_proj",
-    "model.language_model.layers.24.mlp.up_proj",
-    "model.language_model.layers.19.mlp.gate_proj",
-    "model.language_model.layers.16.self_attn.k_proj",
-    "model.language_model.layers.26.self_attn.q_proj",
-    "model.language_model.layers.21.mlp.up_proj",
-    "model.language_model.layers.17.mlp.down_proj",
-    "model.language_model.layers.10.self_attn.v_proj",
-    "model.language_model.layers.25.mlp.down_proj",
-    "model.language_model.layers.11.mlp.up_proj",
-    "model.language_model.layers.2.self_attn.o_proj",
-    "model.language_model.layers.15.mlp.down_proj",
-    "model.language_model.layers.10.self_attn.k_proj",
-    "model.language_model.layers.15.self_attn.q_proj",
-    "model.language_model.layers.9.self_attn.v_proj",
-    "model.language_model.layers.27.self_attn.o_proj",
-    "model.language_model.layers.3.self_attn.v_proj",
-    "model.language_model.layers.10.self_attn.q_proj",
-    "model.language_model.layers.21.mlp.gate_proj",
-    "model.language_model.layers.25.self_attn.q_proj",
     "model.language_model.layers.5.self_attn.o_proj",
     "model.language_model.layers.2.mlp.gate_proj",
-    "model.language_model.layers.9.mlp.gate_proj",
-    "model.language_model.layers.19.self_attn.v_proj",
-    "model.language_model.layers.18.self_attn.k_proj",
-    "model.language_model.layers.19.mlp.down_proj",
-    "model.language_model.layers.23.self_attn.o_proj",
-    "model.language_model.layers.27.mlp.gate_proj",
-    "model.language_model.layers.0.mlp.up_proj",
-    "model.language_model.layers.20.mlp.gate_proj",
-    "model.language_model.layers.28.self_attn.o_proj",
-    "model.language_model.layers.4.self_attn.o_proj",
-    "model.language_model.layers.28.self_attn.v_proj",
-    "model.language_model.layers.11.self_attn.q_proj",
     "model.language_model.layers.26.self_attn.o_proj",
-    "model.language_model.layers.9.mlp.down_proj",
-    "model.language_model.layers.27.self_attn.v_proj",
-    "model.language_model.layers.23.mlp.up_proj",
-    "model.language_model.layers.2.mlp.up_proj",
-    "model.language_model.layers.0.mlp.gate_proj",
-    "model.language_model.layers.18.self_attn.o_proj",
-    "model.language_model.layers.19.self_attn.k_proj",
-    "model.language_model.layers.10.mlp.down_proj",
-    "model.language_model.layers.10.mlp.gate_proj",
-    "model.language_model.layers.0.self_attn.o_proj",
-    "model.language_model.layers.20.mlp.down_proj",
-    "model.language_model.layers.10.self_attn.o_proj",
-    "model.language_model.layers.15.self_attn.o_proj",
-    "model.language_model.layers.18.mlp.down_proj",
-    "model.language_model.layers.1.self_attn.v_proj",
-    "model.language_model.layers.13.self_attn.q_proj",
-    "model.language_model.layers.18.self_attn.q_proj",
-    "model.language_model.layers.3.mlp.down_proj",
-    "model.language_model.layers.20.self_attn.k_proj",
-    "model.language_model.layers.14.self_attn.o_proj",
-    "model.language_model.layers.7.mlp.down_proj",
-    "model.language_model.layers.25.self_attn.v_proj",
-    "model.language_model.layers.29.mlp.gate_proj",
-    "model.language_model.layers.2.self_attn.k_proj",
-    "model.language_model.layers.5.self_attn.k_proj",
-    "model.language_model.layers.9.self_attn.k_proj",
-    "model.language_model.layers.1.mlp.gate_proj",
-    "model.language_model.layers.8.self_attn.o_proj",
-    "model.language_model.layers.22.self_attn.k_proj",
-    "model.language_model.layers.3.self_attn.q_proj",
-    "model.language_model.layers.23.self_attn.k_proj",
-    "model.language_model.layers.3.self_attn.k_proj",
-    "model.language_model.layers.19.self_attn.q_proj",
-    "model.language_model.layers.18.self_attn.v_proj",
-    "model.language_model.layers.10.mlp.up_proj",
-    "model.language_model.layers.11.mlp.gate_proj",
-    "model.language_model.layers.1.mlp.up_proj",
-    "model.language_model.layers.18.mlp.gate_proj",
-    "model.language_model.layers.8.mlp.gate_proj",
     "model.language_model.layers.7.mlp.gate_proj",
     "model.language_model.layers.8.mlp.up_proj",
-    "model.language_model.layers.
-    "model.language_model.layers.14.self_attn.k_proj",
-    "model.language_model.layers.22.self_attn.q_proj",
-    "model.language_model.layers.4.mlp.down_proj",
-    "model.language_model.layers.22.mlp.gate_proj",
-    "model.language_model.layers.15.self_attn.v_proj",
-    "model.language_model.layers.21.self_attn.o_proj",
-    "model.language_model.layers.11.self_attn.o_proj",
-    "model.language_model.layers.20.mlp.up_proj",
-    "model.language_model.layers.16.self_attn.q_proj",
-    "model.language_model.layers.1.self_attn.k_proj",
-    "model.language_model.layers.24.mlp.gate_proj",
-    "model.language_model.layers.26.mlp.gate_proj",
     "model.language_model.layers.2.self_attn.q_proj",
-    "model.language_model.layers.
     "model.language_model.layers.7.self_attn.q_proj",
-    "model.language_model.layers.
-    "model.language_model.layers.27.self_attn.q_proj",
-    "model.language_model.layers.29.mlp.up_proj",
-    "model.language_model.layers.28.self_attn.k_proj",
-    "model.language_model.layers.24.self_attn.o_proj",
-    "model.language_model.layers.26.self_attn.k_proj",
-    "model.language_model.layers.21.mlp.down_proj",
-    "model.language_model.layers.14.mlp.gate_proj",
-    "model.language_model.layers.25.mlp.up_proj",
     "model.language_model.layers.27.mlp.down_proj",
-    "model.language_model.layers.20.self_attn.v_proj",
-    "model.language_model.layers.0.mlp.down_proj",
-    "model.language_model.layers.6.self_attn.v_proj",
-    "model.language_model.layers.4.self_attn.q_proj",
-    "model.language_model.layers.9.self_attn.q_proj",
-    "model.language_model.layers.0.self_attn.q_proj",
-    "model.language_model.layers.27.mlp.up_proj",
-    "model.language_model.layers.29.self_attn.k_proj",
-    "model.language_model.layers.29.self_attn.q_proj",
-    "model.language_model.layers.12.mlp.up_proj",
-    "model.language_model.layers.6.mlp.down_proj",
     "model.language_model.layers.2.mlp.down_proj",
     "model.language_model.layers.6.mlp.gate_proj",
-    "model.language_model.layers.24.self_attn.v_proj",
-    "model.language_model.layers.4.mlp.up_proj",
     "model.language_model.layers.9.self_attn.o_proj",
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.21.self_attn.q_proj",
-    "model.language_model.layers.15.mlp.up_proj",
-    "model.language_model.layers.26.mlp.up_proj",
-    "model.language_model.layers.26.mlp.down_proj",
-    "model.language_model.layers.25.self_attn.o_proj",
     "model.language_model.layers.8.self_attn.v_proj",
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.
     "model.language_model.layers.3.mlp.gate_proj",
-    "model.language_model.layers.
     "model.language_model.layers.9.mlp.up_proj",
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.16.
     "model.language_model.layers.8.self_attn.k_proj",
-    "model.language_model.layers.12.
-    "model.language_model.layers.7.self_attn.o_proj",
-    "model.language_model.layers.18.mlp.up_proj",
-    "model.language_model.layers.13.mlp.up_proj",
-    "model.language_model.layers.16.mlp.up_proj",
-    "model.language_model.layers.17.self_attn.k_proj",
-    "model.language_model.layers.25.self_attn.k_proj",
-    "model.language_model.layers.8.self_attn.q_proj",
     "model.language_model.layers.4.self_attn.v_proj",
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.13.self_attn.k_proj",
-    "model.language_model.layers.7.self_attn.k_proj",
-    "model.language_model.layers.22.self_attn.o_proj",
-    "model.language_model.layers.22.mlp.up_proj",
-    "model.language_model.layers.16.self_attn.o_proj",
-    "model.language_model.layers.24.self_attn.q_proj",
-    "model.language_model.layers.12.self_attn.q_proj",
     "model.language_model.layers.2.self_attn.v_proj",
-    "model.language_model.layers.
     "model.language_model.layers.13.mlp.gate_proj",
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.26.self_attn.v_proj",
-    "model.language_model.layers.28.mlp.up_proj",
-    "model.language_model.layers.19.mlp.up_proj",
-    "model.language_model.layers.16.mlp.gate_proj",
-    "model.language_model.layers.7.self_attn.v_proj",
-    "model.language_model.layers.25.mlp.gate_proj",
-    "model.language_model.layers.13.self_attn.v_proj",
     "model.language_model.layers.20.self_attn.q_proj",
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.11.mlp.down_proj",
-    "model.language_model.layers.0.self_attn.k_proj",
-    "model.language_model.layers.21.self_attn.v_proj",
-    "model.language_model.layers.28.self_attn.q_proj",
     "model.language_model.layers.29.self_attn.o_proj",
     "model.language_model.layers.11.self_attn.k_proj",
-    "model.language_model.layers.
     "model.language_model.layers.7.mlp.up_proj",
     "model.language_model.layers.22.mlp.down_proj",
     "model.language_model.layers.20.self_attn.o_proj",
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.
-    "model.language_model.layers.
   ],
   "target_parameters": null,
   "task_type": "CAUSAL_LM",
(added lines; the diff is cut off after new line 170 in the source)
   "megatron_core": "megatron.core",
   "modules_to_save": null,
   "peft_type": "LORA",
+  "peft_version": "0.18.2.dev0@7a4b07f2070162972f8c0515bc3acd19f81c0ad7",
   "qalora_group_size": 16,
   "r": 16,
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
+    "model.language_model.layers.17.self_attn.q_proj",
+    "model.language_model.layers.7.self_attn.o_proj",
+    "model.language_model.layers.3.mlp.down_proj",
+    "model.language_model.layers.14.mlp.up_proj",
+    "model.language_model.layers.17.self_attn.k_proj",
+    "model.language_model.layers.25.self_attn.o_proj",
+    "model.language_model.layers.6.self_attn.q_proj",
+    "model.language_model.layers.5.self_attn.q_proj",
+    "model.language_model.layers.1.mlp.gate_proj",
     "model.language_model.layers.17.mlp.up_proj",
+    "model.language_model.layers.5.self_attn.k_proj",
+    "model.language_model.layers.16.self_attn.o_proj",
+    "model.language_model.layers.18.mlp.up_proj",
+    "model.language_model.layers.25.self_attn.k_proj",
+    "model.language_model.layers.23.mlp.down_proj",
+    "model.language_model.layers.27.mlp.up_proj",
     "model.language_model.layers.27.self_attn.k_proj",
     "model.language_model.layers.5.self_attn.o_proj",
+    "model.language_model.layers.22.self_attn.k_proj",
+    "model.language_model.layers.1.mlp.down_proj",
     "model.language_model.layers.2.mlp.gate_proj",
     "model.language_model.layers.26.self_attn.o_proj",
     "model.language_model.layers.7.mlp.gate_proj",
+    "model.language_model.layers.24.self_attn.q_proj",
+    "model.language_model.layers.3.self_attn.o_proj",
+    "model.language_model.layers.0.self_attn.q_proj",
+    "model.language_model.layers.21.self_attn.k_proj",
+    "model.language_model.layers.23.self_attn.o_proj",
+    "model.language_model.layers.9.self_attn.q_proj",
+    "model.language_model.layers.5.mlp.gate_proj",
+    "model.language_model.layers.10.self_attn.v_proj",
     "model.language_model.layers.8.mlp.up_proj",
+    "model.language_model.layers.26.self_attn.v_proj",
     "model.language_model.layers.2.self_attn.q_proj",
+    "model.language_model.layers.13.self_attn.o_proj",
+    "model.language_model.layers.7.mlp.down_proj",
+    "model.language_model.layers.24.mlp.down_proj",
+    "model.language_model.layers.6.self_attn.k_proj",
+    "model.language_model.layers.0.self_attn.k_proj",
+    "model.language_model.layers.1.mlp.up_proj",
+    "model.language_model.layers.28.mlp.down_proj",
+    "model.language_model.layers.2.self_attn.k_proj",
+    "model.language_model.layers.22.mlp.up_proj",
     "model.language_model.layers.7.self_attn.q_proj",
+    "model.language_model.layers.22.self_attn.q_proj",
     "model.language_model.layers.27.mlp.down_proj",
     "model.language_model.layers.2.mlp.down_proj",
+    "model.language_model.layers.19.mlp.down_proj",
     "model.language_model.layers.6.mlp.gate_proj",
     "model.language_model.layers.9.self_attn.o_proj",
+    "model.language_model.layers.15.mlp.down_proj",
+    "model.language_model.layers.4.self_attn.o_proj",
+    "model.language_model.layers.29.self_attn.k_proj",
+    "model.language_model.layers.18.self_attn.q_proj",
+    "model.language_model.layers.11.mlp.down_proj",
+    "model.language_model.layers.26.mlp.gate_proj",
+    "model.language_model.layers.23.mlp.up_proj",
+    "model.language_model.layers.0.mlp.down_proj",
     "model.language_model.layers.8.self_attn.v_proj",
+    "model.language_model.layers.14.self_attn.k_proj",
+    "model.language_model.layers.21.mlp.up_proj",
+    "model.language_model.layers.10.self_attn.o_proj",
+    "model.language_model.layers.24.mlp.gate_proj",
+    "model.language_model.layers.28.mlp.up_proj",
+    "model.language_model.layers.29.mlp.down_proj",
     "model.language_model.layers.3.mlp.gate_proj",
+    "model.language_model.layers.8.mlp.down_proj",
+    "model.language_model.layers.9.mlp.down_proj",
+    "model.language_model.layers.18.mlp.down_proj",
+    "model.language_model.layers.19.mlp.gate_proj",
+    "model.language_model.layers.26.mlp.down_proj",
+    "model.language_model.layers.9.self_attn.v_proj",
     "model.language_model.layers.9.mlp.up_proj",
+    "model.language_model.layers.10.self_attn.q_proj",
+    "model.language_model.layers.11.self_attn.q_proj",
+    "model.language_model.layers.18.mlp.gate_proj",
+    "model.language_model.layers.16.self_attn.v_proj",
+    "model.language_model.layers.1.self_attn.k_proj",
+    "model.language_model.layers.25.mlp.up_proj",
+    "model.language_model.layers.28.self_attn.v_proj",
+    "model.language_model.layers.15.mlp.gate_proj",
+    "model.language_model.layers.9.self_attn.k_proj",
+    "model.language_model.layers.27.mlp.gate_proj",
+    "model.language_model.layers.14.self_attn.o_proj",
+    "model.language_model.layers.22.mlp.gate_proj",
+    "model.language_model.layers.14.mlp.down_proj",
     "model.language_model.layers.8.self_attn.k_proj",
+    "model.language_model.layers.12.self_attn.o_proj",
     "model.language_model.layers.4.self_attn.v_proj",
+    "model.language_model.layers.10.mlp.down_proj",
+    "model.language_model.layers.24.mlp.up_proj",
+    "model.language_model.layers.25.mlp.gate_proj",
     "model.language_model.layers.2.self_attn.v_proj",
+    "model.language_model.layers.4.self_attn.k_proj",
+    "model.language_model.layers.8.self_attn.q_proj",
+    "model.language_model.layers.18.self_attn.v_proj",
+    "model.language_model.layers.27.self_attn.o_proj",
+    "model.language_model.layers.16.self_attn.q_proj",
+    "model.language_model.layers.3.mlp.up_proj",
     "model.language_model.layers.13.mlp.gate_proj",
+    "model.language_model.layers.17.mlp.down_proj",
+    "model.language_model.layers.28.self_attn.o_proj",
     "model.language_model.layers.20.self_attn.q_proj",
+    "model.language_model.layers.0.mlp.up_proj",
+    "model.language_model.layers.16.mlp.down_proj",
     "model.language_model.layers.29.self_attn.o_proj",
     "model.language_model.layers.11.self_attn.k_proj",
+    "model.language_model.layers.20.self_attn.v_proj",
+    "model.language_model.layers.14.self_attn.v_proj",
+    "model.language_model.layers.11.mlp.gate_proj",
+    "model.language_model.layers.21.mlp.down_proj",
+    "model.language_model.layers.12.mlp.up_proj",
+    "model.language_model.layers.10.mlp.gate_proj",
+    "model.language_model.layers.10.self_attn.k_proj",
+    "model.language_model.layers.27.self_attn.q_proj",
+    "model.language_model.layers.8.mlp.gate_proj",
+    "model.language_model.layers.19.self_attn.q_proj",
+    "model.language_model.layers.23.self_attn.k_proj",
+    "model.language_model.layers.13.self_attn.q_proj",
+    "model.language_model.layers.0.self_attn.v_proj",
+    "model.language_model.layers.8.self_attn.o_proj",
+    "model.language_model.layers.0.mlp.gate_proj",
+    "model.language_model.layers.17.mlp.gate_proj",
+    "model.language_model.layers.1.self_attn.o_proj",
+    "model.language_model.layers.14.self_attn.q_proj",
+    "model.language_model.layers.14.mlp.gate_proj",
+    "model.language_model.layers.12.mlp.down_proj",
+    "model.language_model.layers.21.self_attn.o_proj",
+    "model.language_model.layers.5.mlp.up_proj",
+    "model.language_model.layers.20.mlp.up_proj",
+    "model.language_model.layers.13.mlp.up_proj",
+    "model.language_model.layers.18.self_attn.k_proj",
+    "model.language_model.layers.23.mlp.gate_proj",
+    "model.language_model.layers.4.mlp.down_proj",
+    "model.language_model.layers.24.self_attn.o_proj",
+    "model.language_model.layers.28.self_attn.k_proj",
+    "model.language_model.layers.13.self_attn.v_proj",
+    "model.language_model.layers.6.mlp.down_proj",
|
| 171 |
+
"model.language_model.layers.13.mlp.down_proj",
|
| 172 |
+
"model.language_model.layers.21.self_attn.q_proj",
|
| 173 |
+
"model.language_model.layers.10.mlp.up_proj",
|
| 174 |
+
"model.language_model.layers.15.self_attn.v_proj",
|
| 175 |
+
"model.language_model.layers.0.self_attn.o_proj",
|
| 176 |
+
"model.language_model.layers.9.mlp.gate_proj",
|
| 177 |
+
"model.language_model.layers.16.mlp.up_proj",
|
| 178 |
+
"model.language_model.layers.11.self_attn.o_proj",
|
| 179 |
+
"model.language_model.layers.17.self_attn.o_proj",
|
| 180 |
+
"model.language_model.layers.20.mlp.gate_proj",
|
| 181 |
+
"model.language_model.layers.26.mlp.up_proj",
|
| 182 |
+
"model.language_model.layers.15.mlp.up_proj",
|
| 183 |
+
"model.language_model.layers.12.mlp.gate_proj",
|
| 184 |
+
"model.language_model.layers.22.self_attn.o_proj",
|
| 185 |
+
"model.language_model.layers.28.mlp.gate_proj",
|
| 186 |
+
"model.language_model.layers.21.mlp.gate_proj",
|
| 187 |
+
"model.language_model.layers.2.mlp.up_proj",
|
| 188 |
+
"model.language_model.layers.28.self_attn.q_proj",
|
| 189 |
+
"model.language_model.layers.29.self_attn.q_proj",
|
| 190 |
"model.language_model.layers.7.mlp.up_proj",
|
| 191 |
+
"model.language_model.layers.15.self_attn.q_proj",
|
| 192 |
+
"model.language_model.layers.19.self_attn.k_proj",
|
| 193 |
+
"model.language_model.layers.7.self_attn.v_proj",
|
| 194 |
+
"model.language_model.layers.29.mlp.gate_proj",
|
| 195 |
+
"model.language_model.layers.24.self_attn.k_proj",
|
| 196 |
+
"model.language_model.layers.16.mlp.gate_proj",
|
| 197 |
+
"model.language_model.layers.12.self_attn.k_proj",
|
| 198 |
+
"model.language_model.layers.4.mlp.up_proj",
|
| 199 |
+
"model.language_model.layers.20.mlp.down_proj",
|
| 200 |
+
"model.language_model.layers.5.mlp.down_proj",
|
| 201 |
"model.language_model.layers.22.mlp.down_proj",
|
| 202 |
+
"model.language_model.layers.3.self_attn.q_proj",
|
| 203 |
+
"model.language_model.layers.26.self_attn.k_proj",
|
| 204 |
"model.language_model.layers.20.self_attn.o_proj",
|
| 205 |
+
"model.language_model.layers.24.self_attn.v_proj",
|
| 206 |
+
"model.language_model.layers.21.self_attn.v_proj",
|
| 207 |
+
"model.language_model.layers.19.self_attn.o_proj",
|
| 208 |
+
"model.language_model.layers.29.mlp.up_proj",
|
| 209 |
+
"model.language_model.layers.13.self_attn.k_proj",
|
| 210 |
+
"model.language_model.layers.2.self_attn.o_proj",
|
| 211 |
+
"model.language_model.layers.16.self_attn.k_proj",
|
| 212 |
+
"model.language_model.layers.22.self_attn.v_proj",
|
| 213 |
+
"model.language_model.layers.25.self_attn.v_proj",
|
| 214 |
+
"model.language_model.layers.25.mlp.down_proj",
|
| 215 |
+
"model.language_model.layers.4.mlp.gate_proj",
|
| 216 |
+
"model.language_model.layers.6.self_attn.o_proj",
|
| 217 |
+
"model.language_model.layers.25.self_attn.q_proj",
|
| 218 |
+
"model.language_model.layers.7.self_attn.k_proj",
|
| 219 |
+
"model.language_model.layers.11.mlp.up_proj",
|
| 220 |
+
"model.language_model.layers.20.self_attn.k_proj",
|
| 221 |
+
"model.language_model.layers.6.mlp.up_proj",
|
| 222 |
+
"model.language_model.layers.15.self_attn.k_proj",
|
| 223 |
+
"model.language_model.layers.19.mlp.up_proj",
|
| 224 |
+
"model.language_model.layers.12.self_attn.q_proj",
|
| 225 |
+
"model.language_model.layers.4.self_attn.q_proj",
|
| 226 |
+
"model.language_model.layers.18.self_attn.o_proj",
|
| 227 |
+
"model.language_model.layers.1.self_attn.v_proj",
|
| 228 |
+
"model.language_model.layers.15.self_attn.o_proj",
|
| 229 |
+
"model.language_model.layers.19.self_attn.v_proj",
|
| 230 |
+
"model.language_model.layers.6.self_attn.v_proj",
|
| 231 |
+
"model.language_model.layers.12.self_attn.v_proj",
|
| 232 |
+
"model.language_model.layers.3.self_attn.k_proj",
|
| 233 |
+
"model.language_model.layers.26.self_attn.q_proj",
|
| 234 |
+
"model.language_model.layers.1.self_attn.q_proj",
|
| 235 |
+
"model.language_model.layers.27.self_attn.v_proj",
|
| 236 |
+
"model.language_model.layers.3.self_attn.v_proj",
|
| 237 |
+
"model.language_model.layers.23.self_attn.q_proj"
|
| 238 |
],
|
| 239 |
"target_parameters": null,
|
| 240 |
"task_type": "CAUSAL_LM",
|
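The `target_modules` entries above follow one naming scheme: each string addresses a single linear projection (`q/k/v/o_proj` in attention, `gate/up/down_proj` in the MLP) of one decoder layer. A minimal sketch of that scheme in plain Python; the 30-layer count and full layer-by-projection coverage are illustrative assumptions (the actual list in the diff omits a few combinations):

```python
# Sketch of the module naming scheme used in adapter_config.json.
# NOTE: num_layers=30 and exhaustive coverage are assumptions for
# illustration, not a claim about the exact shipped list.
ATTN_PROJS = ["q_proj", "k_proj", "v_proj", "o_proj"]
MLP_PROJS = ["gate_proj", "up_proj", "down_proj"]

def target_modules(num_layers: int = 30) -> list[str]:
    mods = []
    for layer in range(num_layers):
        for proj in ATTN_PROJS:
            mods.append(f"model.language_model.layers.{layer}.self_attn.{proj}")
        for proj in MLP_PROJS:
            mods.append(f"model.language_model.layers.{layer}.mlp.{proj}")
    return mods

mods = target_modules()
print(len(mods))   # 30 layers x 7 projections = 210
print(mods[0])     # model.language_model.layers.0.self_attn.q_proj
```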
adapter_model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:f9d4da55f38e24757e8eb8962365a466a0ffc7d89aaac9ffdc75a3d30f2c4855
 size 74403016
checkpoint-121/adapter_config.json
CHANGED
@@ -24,217 +24,217 @@
     "megatron_core": "megatron.core",
     "modules_to_save": null,
     "peft_type": "LORA",
-    "peft_version": "0.18.2.dev0@
+    "peft_version": "0.18.2.dev0@7a4b07f2070162972f8c0515bc3acd19f81c0ad7",
     "qalora_group_size": 16,
     "r": 16,
     "rank_pattern": {},
     "revision": null,
     "target_modules": [
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.3.mlp.
+        "model.language_model.layers.17.self_attn.q_proj",
+        "model.language_model.layers.7.self_attn.o_proj",
+        "model.language_model.layers.3.mlp.down_proj",
+        "model.language_model.layers.14.mlp.up_proj",
+        "model.language_model.layers.17.self_attn.k_proj",
+        "model.language_model.layers.25.self_attn.o_proj",
+        "model.language_model.layers.6.self_attn.q_proj",
+        "model.language_model.layers.5.self_attn.q_proj",
+        "model.language_model.layers.1.mlp.gate_proj",
         "model.language_model.layers.17.mlp.up_proj",
-        "model.language_model.layers.
+        "model.language_model.layers.5.self_attn.k_proj",
+        "model.language_model.layers.16.self_attn.o_proj",
+        "model.language_model.layers.18.mlp.up_proj",
+        "model.language_model.layers.25.self_attn.k_proj",
+        "model.language_model.layers.23.mlp.down_proj",
+        "model.language_model.layers.27.mlp.up_proj",
         "model.language_model.layers.27.self_attn.k_proj",
-        "model.language_model.layers.28.mlp.down_proj",
-        "model.language_model.layers.6.mlp.up_proj",
-        "model.language_model.layers.24.self_attn.k_proj",
-        "model.language_model.layers.6.self_attn.q_proj",
-        "model.language_model.layers.17.self_attn.q_proj",
-        "model.language_model.layers.15.self_attn.k_proj",
-        "model.language_model.layers.24.mlp.up_proj",
-        "model.language_model.layers.19.mlp.gate_proj",
-        "model.language_model.layers.16.self_attn.k_proj",
-        "model.language_model.layers.26.self_attn.q_proj",
-        "model.language_model.layers.21.mlp.up_proj",
-        "model.language_model.layers.17.mlp.down_proj",
-        "model.language_model.layers.10.self_attn.v_proj",
-        "model.language_model.layers.25.mlp.down_proj",
-        "model.language_model.layers.11.mlp.up_proj",
-        "model.language_model.layers.2.self_attn.o_proj",
-        "model.language_model.layers.15.mlp.down_proj",
-        "model.language_model.layers.10.self_attn.k_proj",
-        "model.language_model.layers.15.self_attn.q_proj",
-        "model.language_model.layers.9.self_attn.v_proj",
-        "model.language_model.layers.27.self_attn.o_proj",
-        "model.language_model.layers.3.self_attn.v_proj",
-        "model.language_model.layers.10.self_attn.q_proj",
-        "model.language_model.layers.21.mlp.gate_proj",
-        "model.language_model.layers.25.self_attn.q_proj",
         "model.language_model.layers.5.self_attn.o_proj",
+        "model.language_model.layers.22.self_attn.k_proj",
+        "model.language_model.layers.1.mlp.down_proj",
         "model.language_model.layers.2.mlp.gate_proj",
-        "model.language_model.layers.9.mlp.gate_proj",
-        "model.language_model.layers.19.self_attn.v_proj",
-        "model.language_model.layers.18.self_attn.k_proj",
-        "model.language_model.layers.19.mlp.down_proj",
-        "model.language_model.layers.23.self_attn.o_proj",
-        "model.language_model.layers.27.mlp.gate_proj",
-        "model.language_model.layers.0.mlp.up_proj",
-        "model.language_model.layers.20.mlp.gate_proj",
-        "model.language_model.layers.28.self_attn.o_proj",
-        "model.language_model.layers.4.self_attn.o_proj",
-        "model.language_model.layers.28.self_attn.v_proj",
-        "model.language_model.layers.11.self_attn.q_proj",
         "model.language_model.layers.26.self_attn.o_proj",
-        "model.language_model.layers.9.mlp.down_proj",
-        "model.language_model.layers.27.self_attn.v_proj",
-        "model.language_model.layers.23.mlp.up_proj",
-        "model.language_model.layers.2.mlp.up_proj",
-        "model.language_model.layers.0.mlp.gate_proj",
-        "model.language_model.layers.18.self_attn.o_proj",
-        "model.language_model.layers.19.self_attn.k_proj",
-        "model.language_model.layers.10.mlp.down_proj",
-        "model.language_model.layers.10.mlp.gate_proj",
-        "model.language_model.layers.0.self_attn.o_proj",
-        "model.language_model.layers.20.mlp.down_proj",
-        "model.language_model.layers.10.self_attn.o_proj",
-        "model.language_model.layers.15.self_attn.o_proj",
-        "model.language_model.layers.18.mlp.down_proj",
-        "model.language_model.layers.1.self_attn.v_proj",
-        "model.language_model.layers.13.self_attn.q_proj",
-        "model.language_model.layers.18.self_attn.q_proj",
-        "model.language_model.layers.3.mlp.down_proj",
-        "model.language_model.layers.20.self_attn.k_proj",
-        "model.language_model.layers.14.self_attn.o_proj",
-        "model.language_model.layers.7.mlp.down_proj",
-        "model.language_model.layers.25.self_attn.v_proj",
-        "model.language_model.layers.29.mlp.gate_proj",
-        "model.language_model.layers.2.self_attn.k_proj",
-        "model.language_model.layers.5.self_attn.k_proj",
-        "model.language_model.layers.9.self_attn.k_proj",
-        "model.language_model.layers.1.mlp.gate_proj",
-        "model.language_model.layers.8.self_attn.o_proj",
-        "model.language_model.layers.22.self_attn.k_proj",
-        "model.language_model.layers.3.self_attn.q_proj",
-        "model.language_model.layers.23.self_attn.k_proj",
-        "model.language_model.layers.3.self_attn.k_proj",
-        "model.language_model.layers.19.self_attn.q_proj",
-        "model.language_model.layers.18.self_attn.v_proj",
-        "model.language_model.layers.10.mlp.up_proj",
-        "model.language_model.layers.11.mlp.gate_proj",
-        "model.language_model.layers.1.mlp.up_proj",
-        "model.language_model.layers.18.mlp.gate_proj",
-        "model.language_model.layers.8.mlp.gate_proj",
         "model.language_model.layers.7.mlp.gate_proj",
+        "model.language_model.layers.24.self_attn.q_proj",
+        "model.language_model.layers.3.self_attn.o_proj",
+        "model.language_model.layers.0.self_attn.q_proj",
+        "model.language_model.layers.21.self_attn.k_proj",
+        "model.language_model.layers.23.self_attn.o_proj",
+        "model.language_model.layers.9.self_attn.q_proj",
+        "model.language_model.layers.5.mlp.gate_proj",
+        "model.language_model.layers.10.self_attn.v_proj",
         "model.language_model.layers.8.mlp.up_proj",
-        "model.language_model.layers.
-        "model.language_model.layers.14.self_attn.k_proj",
-        "model.language_model.layers.22.self_attn.q_proj",
-        "model.language_model.layers.4.mlp.down_proj",
-        "model.language_model.layers.22.mlp.gate_proj",
-        "model.language_model.layers.15.self_attn.v_proj",
-        "model.language_model.layers.21.self_attn.o_proj",
-        "model.language_model.layers.11.self_attn.o_proj",
-        "model.language_model.layers.20.mlp.up_proj",
-        "model.language_model.layers.16.self_attn.q_proj",
-        "model.language_model.layers.1.self_attn.k_proj",
-        "model.language_model.layers.24.mlp.gate_proj",
-        "model.language_model.layers.26.mlp.gate_proj",
+        "model.language_model.layers.26.self_attn.v_proj",
         "model.language_model.layers.2.self_attn.q_proj",
-        "model.language_model.layers.
+        "model.language_model.layers.13.self_attn.o_proj",
+        "model.language_model.layers.7.mlp.down_proj",
+        "model.language_model.layers.24.mlp.down_proj",
+        "model.language_model.layers.6.self_attn.k_proj",
+        "model.language_model.layers.0.self_attn.k_proj",
+        "model.language_model.layers.1.mlp.up_proj",
+        "model.language_model.layers.28.mlp.down_proj",
+        "model.language_model.layers.2.self_attn.k_proj",
+        "model.language_model.layers.22.mlp.up_proj",
         "model.language_model.layers.7.self_attn.q_proj",
-        "model.language_model.layers.
-        "model.language_model.layers.27.self_attn.q_proj",
-        "model.language_model.layers.29.mlp.up_proj",
-        "model.language_model.layers.28.self_attn.k_proj",
-        "model.language_model.layers.24.self_attn.o_proj",
-        "model.language_model.layers.26.self_attn.k_proj",
-        "model.language_model.layers.21.mlp.down_proj",
-        "model.language_model.layers.14.mlp.gate_proj",
-        "model.language_model.layers.25.mlp.up_proj",
+        "model.language_model.layers.22.self_attn.q_proj",
         "model.language_model.layers.27.mlp.down_proj",
-        "model.language_model.layers.20.self_attn.v_proj",
-        "model.language_model.layers.0.mlp.down_proj",
-        "model.language_model.layers.6.self_attn.v_proj",
-        "model.language_model.layers.4.self_attn.q_proj",
-        "model.language_model.layers.9.self_attn.q_proj",
-        "model.language_model.layers.0.self_attn.q_proj",
-        "model.language_model.layers.27.mlp.up_proj",
-        "model.language_model.layers.29.self_attn.k_proj",
-        "model.language_model.layers.29.self_attn.q_proj",
-        "model.language_model.layers.12.mlp.up_proj",
-        "model.language_model.layers.6.mlp.down_proj",
         "model.language_model.layers.2.mlp.down_proj",
+        "model.language_model.layers.19.mlp.down_proj",
         "model.language_model.layers.6.mlp.gate_proj",
-        "model.language_model.layers.24.self_attn.v_proj",
-        "model.language_model.layers.4.mlp.up_proj",
         "model.language_model.layers.9.self_attn.o_proj",
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.21.self_attn.q_proj",
-        "model.language_model.layers.15.mlp.up_proj",
-        "model.language_model.layers.26.mlp.up_proj",
-        "model.language_model.layers.26.mlp.down_proj",
-        "model.language_model.layers.25.self_attn.o_proj",
+        "model.language_model.layers.15.mlp.down_proj",
+        "model.language_model.layers.4.self_attn.o_proj",
+        "model.language_model.layers.29.self_attn.k_proj",
+        "model.language_model.layers.18.self_attn.q_proj",
+        "model.language_model.layers.11.mlp.down_proj",
+        "model.language_model.layers.26.mlp.gate_proj",
+        "model.language_model.layers.23.mlp.up_proj",
+        "model.language_model.layers.0.mlp.down_proj",
         "model.language_model.layers.8.self_attn.v_proj",
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.
+        "model.language_model.layers.14.self_attn.k_proj",
+        "model.language_model.layers.21.mlp.up_proj",
+        "model.language_model.layers.10.self_attn.o_proj",
+        "model.language_model.layers.24.mlp.gate_proj",
+        "model.language_model.layers.28.mlp.up_proj",
+        "model.language_model.layers.29.mlp.down_proj",
         "model.language_model.layers.3.mlp.gate_proj",
-        "model.language_model.layers.
+        "model.language_model.layers.8.mlp.down_proj",
+        "model.language_model.layers.9.mlp.down_proj",
+        "model.language_model.layers.18.mlp.down_proj",
+        "model.language_model.layers.19.mlp.gate_proj",
+        "model.language_model.layers.26.mlp.down_proj",
+        "model.language_model.layers.9.self_attn.v_proj",
         "model.language_model.layers.9.mlp.up_proj",
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.16.
+        "model.language_model.layers.10.self_attn.q_proj",
+        "model.language_model.layers.11.self_attn.q_proj",
+        "model.language_model.layers.18.mlp.gate_proj",
+        "model.language_model.layers.16.self_attn.v_proj",
+        "model.language_model.layers.1.self_attn.k_proj",
+        "model.language_model.layers.25.mlp.up_proj",
+        "model.language_model.layers.28.self_attn.v_proj",
+        "model.language_model.layers.15.mlp.gate_proj",
+        "model.language_model.layers.9.self_attn.k_proj",
+        "model.language_model.layers.27.mlp.gate_proj",
+        "model.language_model.layers.14.self_attn.o_proj",
+        "model.language_model.layers.22.mlp.gate_proj",
+        "model.language_model.layers.14.mlp.down_proj",
         "model.language_model.layers.8.self_attn.k_proj",
-        "model.language_model.layers.12.
-        "model.language_model.layers.7.self_attn.o_proj",
-        "model.language_model.layers.18.mlp.up_proj",
-        "model.language_model.layers.13.mlp.up_proj",
-        "model.language_model.layers.16.mlp.up_proj",
-        "model.language_model.layers.17.self_attn.k_proj",
-        "model.language_model.layers.25.self_attn.k_proj",
-        "model.language_model.layers.8.self_attn.q_proj",
+        "model.language_model.layers.12.self_attn.o_proj",
         "model.language_model.layers.4.self_attn.v_proj",
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.13.self_attn.k_proj",
-        "model.language_model.layers.7.self_attn.k_proj",
-        "model.language_model.layers.22.self_attn.o_proj",
-        "model.language_model.layers.22.mlp.up_proj",
-        "model.language_model.layers.16.self_attn.o_proj",
-        "model.language_model.layers.24.self_attn.q_proj",
-        "model.language_model.layers.12.self_attn.q_proj",
+        "model.language_model.layers.10.mlp.down_proj",
+        "model.language_model.layers.24.mlp.up_proj",
+        "model.language_model.layers.25.mlp.gate_proj",
         "model.language_model.layers.2.self_attn.v_proj",
-        "model.language_model.layers.
+        "model.language_model.layers.4.self_attn.k_proj",
+        "model.language_model.layers.8.self_attn.q_proj",
+        "model.language_model.layers.18.self_attn.v_proj",
+        "model.language_model.layers.27.self_attn.o_proj",
+        "model.language_model.layers.16.self_attn.q_proj",
+        "model.language_model.layers.3.mlp.up_proj",
         "model.language_model.layers.13.mlp.gate_proj",
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.26.self_attn.v_proj",
-        "model.language_model.layers.28.mlp.up_proj",
-        "model.language_model.layers.19.mlp.up_proj",
-        "model.language_model.layers.16.mlp.gate_proj",
-        "model.language_model.layers.7.self_attn.v_proj",
-        "model.language_model.layers.25.mlp.gate_proj",
-        "model.language_model.layers.13.self_attn.v_proj",
+        "model.language_model.layers.17.mlp.down_proj",
+        "model.language_model.layers.28.self_attn.o_proj",
         "model.language_model.layers.20.self_attn.q_proj",
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.11.mlp.down_proj",
-        "model.language_model.layers.0.self_attn.k_proj",
-        "model.language_model.layers.21.self_attn.v_proj",
-        "model.language_model.layers.28.self_attn.q_proj",
+        "model.language_model.layers.0.mlp.up_proj",
+        "model.language_model.layers.16.mlp.down_proj",
         "model.language_model.layers.29.self_attn.o_proj",
         "model.language_model.layers.11.self_attn.k_proj",
-        "model.language_model.layers.
+        "model.language_model.layers.20.self_attn.v_proj",
+        "model.language_model.layers.14.self_attn.v_proj",
+        "model.language_model.layers.11.mlp.gate_proj",
+        "model.language_model.layers.21.mlp.down_proj",
+        "model.language_model.layers.12.mlp.up_proj",
+        "model.language_model.layers.10.mlp.gate_proj",
+        "model.language_model.layers.10.self_attn.k_proj",
+        "model.language_model.layers.27.self_attn.q_proj",
+        "model.language_model.layers.8.mlp.gate_proj",
+        "model.language_model.layers.19.self_attn.q_proj",
+        "model.language_model.layers.23.self_attn.k_proj",
+        "model.language_model.layers.13.self_attn.q_proj",
+        "model.language_model.layers.0.self_attn.v_proj",
+        "model.language_model.layers.8.self_attn.o_proj",
+        "model.language_model.layers.0.mlp.gate_proj",
+        "model.language_model.layers.17.mlp.gate_proj",
+        "model.language_model.layers.1.self_attn.o_proj",
+        "model.language_model.layers.14.self_attn.q_proj",
+        "model.language_model.layers.14.mlp.gate_proj",
+        "model.language_model.layers.12.mlp.down_proj",
+        "model.language_model.layers.21.self_attn.o_proj",
+        "model.language_model.layers.5.mlp.up_proj",
+        "model.language_model.layers.20.mlp.up_proj",
+        "model.language_model.layers.13.mlp.up_proj",
+        "model.language_model.layers.18.self_attn.k_proj",
+        "model.language_model.layers.23.mlp.gate_proj",
+        "model.language_model.layers.4.mlp.down_proj",
+        "model.language_model.layers.24.self_attn.o_proj",
+        "model.language_model.layers.28.self_attn.k_proj",
+        "model.language_model.layers.13.self_attn.v_proj",
+        "model.language_model.layers.6.mlp.down_proj",
+        "model.language_model.layers.13.mlp.down_proj",
+        "model.language_model.layers.21.self_attn.q_proj",
+        "model.language_model.layers.10.mlp.up_proj",
+        "model.language_model.layers.15.self_attn.v_proj",
+        "model.language_model.layers.0.self_attn.o_proj",
+        "model.language_model.layers.9.mlp.gate_proj",
+        "model.language_model.layers.16.mlp.up_proj",
+        "model.language_model.layers.11.self_attn.o_proj",
+        "model.language_model.layers.17.self_attn.o_proj",
+        "model.language_model.layers.20.mlp.gate_proj",
+        "model.language_model.layers.26.mlp.up_proj",
+        "model.language_model.layers.15.mlp.up_proj",
+        "model.language_model.layers.12.mlp.gate_proj",
+        "model.language_model.layers.22.self_attn.o_proj",
+        "model.language_model.layers.28.mlp.gate_proj",
+        "model.language_model.layers.21.mlp.gate_proj",
+        "model.language_model.layers.2.mlp.up_proj",
+        "model.language_model.layers.28.self_attn.q_proj",
+        "model.language_model.layers.29.self_attn.q_proj",
         "model.language_model.layers.7.mlp.up_proj",
+        "model.language_model.layers.15.self_attn.q_proj",
+        "model.language_model.layers.19.self_attn.k_proj",
+        "model.language_model.layers.7.self_attn.v_proj",
+        "model.language_model.layers.29.mlp.gate_proj",
+        "model.language_model.layers.24.self_attn.k_proj",
+        "model.language_model.layers.16.mlp.gate_proj",
+        "model.language_model.layers.12.self_attn.k_proj",
+        "model.language_model.layers.4.mlp.up_proj",
+        "model.language_model.layers.20.mlp.down_proj",
+        "model.language_model.layers.5.mlp.down_proj",
         "model.language_model.layers.22.mlp.down_proj",
+        "model.language_model.layers.3.self_attn.q_proj",
+        "model.language_model.layers.26.self_attn.k_proj",
         "model.language_model.layers.20.self_attn.o_proj",
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.
-        "model.language_model.layers.
+        "model.language_model.layers.24.self_attn.v_proj",
+        "model.language_model.layers.21.self_attn.v_proj",
+        "model.language_model.layers.19.self_attn.o_proj",
+        "model.language_model.layers.29.mlp.up_proj",
+        "model.language_model.layers.13.self_attn.k_proj",
+        "model.language_model.layers.2.self_attn.o_proj",
+        "model.language_model.layers.16.self_attn.k_proj",
+        "model.language_model.layers.22.self_attn.v_proj",
+        "model.language_model.layers.25.self_attn.v_proj",
+        "model.language_model.layers.25.mlp.down_proj",
+        "model.language_model.layers.4.mlp.gate_proj",
+        "model.language_model.layers.6.self_attn.o_proj",
+        "model.language_model.layers.25.self_attn.q_proj",
+        "model.language_model.layers.7.self_attn.k_proj",
+        "model.language_model.layers.11.mlp.up_proj",
+        "model.language_model.layers.20.self_attn.k_proj",
+        "model.language_model.layers.6.mlp.up_proj",
+        "model.language_model.layers.15.self_attn.k_proj",
+        "model.language_model.layers.19.mlp.up_proj",
+        "model.language_model.layers.12.self_attn.q_proj",
+        "model.language_model.layers.4.self_attn.q_proj",
+        "model.language_model.layers.18.self_attn.o_proj",
+        "model.language_model.layers.1.self_attn.v_proj",
+        "model.language_model.layers.15.self_attn.o_proj",
+        "model.language_model.layers.19.self_attn.v_proj",
+        "model.language_model.layers.6.self_attn.v_proj",
+        "model.language_model.layers.12.self_attn.v_proj",
+        "model.language_model.layers.3.self_attn.k_proj",
+        "model.language_model.layers.26.self_attn.q_proj",
+        "model.language_model.layers.1.self_attn.q_proj",
+        "model.language_model.layers.27.self_attn.v_proj",
+        "model.language_model.layers.3.self_attn.v_proj",
+        "model.language_model.layers.23.self_attn.q_proj"
     ],
     "target_parameters": null,
     "task_type": "CAUSAL_LM",
checkpoint-121/adapter_model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:b1ae622f35ab2e6792a139da50e98d7ef28f5e8f09e820dbf009b9f3e8b94a0c
 size 37232104
checkpoint-121/optimizer.pt
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:5666cabc3b820b0ff2e713d921b0145061ad461872ec913541f9397f13205211
+size 38237839
checkpoint-121/rng_state.pth
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:b6bdf76518a45d84478f951ab8beaeffe8beb547d3893d1ae00c3e09ecf21c8b
 size 14645
checkpoint-121/scheduler.pt
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:65cc4565c40d1efa1f6d66b589f484e98593cfb0fbda91711275a8867117b453
 size 1465
checkpoint-121/tokenizer.json
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:cc8d3a0ce36466ccc1278bf987df5f71db1719b9ca6b4118264f45cb627bfe0f
+size 32169626
checkpoint-121/tokenizer_config.json
CHANGED
@@ -41,7 +41,7 @@
 "think_token": "<|think|>"
 },
 "pad_token": "<pad>",
-"padding_side": "
+"padding_side": "right",
 "processor_class": "Gemma4Processor",
 "response_schema": {
 "properties": {
checkpoint-121/trainer_state.json
CHANGED
@@ -1,7 +1,7 @@
 {
 "best_global_step": 121,
-"best_metric": 0.
-"best_model_checkpoint": "/
+"best_metric": 0.4985087513923645,
+"best_model_checkpoint": "/workspace/gemma4-26b-securecode/checkpoint-121",
 "epoch": 1.0,
 "eval_steps": 500,
 "global_step": 121,
@@ -10,134 +10,134 @@
 "is_world_process_zero": true,
 "log_history": [
 {
-"entropy": 1.
+"entropy": 1.0907821020111441,
 "epoch": 0.0827300930713547,
-"grad_norm":
+"grad_norm": 20.875,
 "learning_rate": 1.8e-05,
-"loss":
+"loss": 80.26775512695312,
-"mean_token_accuracy": 0.
+"mean_token_accuracy": 0.4542873948812485,
-"num_tokens":
+"num_tokens": 326185.0,
 "step": 10
 },
 {
-"entropy": 0.
+"entropy": 0.8271314173936843,
 "epoch": 0.1654601861427094,
-"grad_norm":
+"grad_norm": 8.75,
 "learning_rate": 3.8e-05,
-"loss":
+"loss": 58.08096923828125,
-"mean_token_accuracy": 0.
+"mean_token_accuracy": 0.5611657274886965,
-"num_tokens":
+"num_tokens": 653865.0,
 "step": 20
 },
 {
-"entropy": 0.
+"entropy": 0.4787554959766567,
 "epoch": 0.2481902792140641,
-"grad_norm":
+"grad_norm": 1.7109375,
 "learning_rate": 5.8e-05,
-"loss":
+"loss": 25.493240356445312,
-"mean_token_accuracy": 0.
+"mean_token_accuracy": 0.7378443486988544,
-"num_tokens":
+"num_tokens": 981337.0,
 "step": 30
 },
 {
-"entropy":
+"entropy": 0.7855595085769892,
 "epoch": 0.3309203722854188,
-"grad_norm":
+"grad_norm": 0.8671875,
 "learning_rate": 7.800000000000001e-05,
-"loss":
+"loss": 14.629072570800782,
-"mean_token_accuracy": 0.
+"mean_token_accuracy": 0.7917733617126942,
-"num_tokens":
+"num_tokens": 1308584.0,
 "step": 40
 },
 {
-"entropy":
+"entropy": 0.7569877350702882,
 "epoch": 0.4136504653567735,
-"grad_norm":
+"grad_norm": 2.109375,
 "learning_rate": 9.8e-05,
-"loss":
+"loss": 12.609142303466797,
-"mean_token_accuracy": 0.
+"mean_token_accuracy": 0.8013272784650326,
-"num_tokens":
+"num_tokens": 1635098.0,
 "step": 50
 },
 {
-"entropy": 0.
+"entropy": 0.6735223602503538,
 "epoch": 0.4963805584281282,
-"grad_norm":
+"grad_norm": 16.875,
 "learning_rate": 0.000118,
-"loss":
+"loss": 10.704925537109375,
-"mean_token_accuracy": 0.
+"mean_token_accuracy": 0.8209844313561916,
-"num_tokens":
+"num_tokens": 1962302.0,
 "step": 60
 },
 {
-"entropy": 0.
+"entropy": 0.6005677949637175,
 "epoch": 0.5791106514994829,
-"grad_norm":
+"grad_norm": 1.546875,
 "learning_rate": 0.000138,
-"loss":
+"loss": 9.783185577392578,
-"mean_token_accuracy": 0.
+"mean_token_accuracy": 0.8308866504579783,
-"num_tokens":
+"num_tokens": 2289982.0,
 "step": 70
 },
 {
-"entropy": 0.
+"entropy": 0.5877057909965515,
 "epoch": 0.6618407445708376,
-"grad_norm":
+"grad_norm": 11.25,
 "learning_rate": 0.00015800000000000002,
-"loss":
+"loss": 9.298844909667968,
-"mean_token_accuracy": 0.
+"mean_token_accuracy": 0.8359990835189819,
-"num_tokens":
+"num_tokens": 2616786.0,
 "step": 80
 },
 {
-"entropy": 0.
+"entropy": 0.5447238819673658,
 "epoch": 0.7445708376421923,
-"grad_norm":
+"grad_norm": 1.2890625,
 "learning_rate": 0.00017800000000000002,
-"loss":
+"loss": 8.777264404296876,
-"mean_token_accuracy": 0.
+"mean_token_accuracy": 0.8440194871276617,
-"num_tokens":
+"num_tokens": 2941975.0,
 "step": 90
 },
 {
-"entropy": 0.
+"entropy": 0.5323287105187774,
 "epoch": 0.827300930713547,
-"grad_norm": 0.
+"grad_norm": 0.70703125,
 "learning_rate": 0.00019800000000000002,
-"loss":
+"loss": 8.489185333251953,
-"mean_token_accuracy": 0.
+"mean_token_accuracy": 0.8486687760800123,
-"num_tokens":
+"num_tokens": 3269655.0,
 "step": 100
 },
 {
-"entropy": 0.
+"entropy": 0.4949887519702315,
 "epoch": 0.9100310237849017,
-"grad_norm": 0.
+"grad_norm": 0.439453125,
 "learning_rate": 0.00019942266891397815,
-"loss":
+"loss": 8.192723083496094,
-"mean_token_accuracy": 0.
+"mean_token_accuracy": 0.8528529018163681,
-"num_tokens":
+"num_tokens": 3595193.0,
 "step": 110
 },
 {
-"entropy": 0.
+"entropy": 0.4980895221233368,
 "epoch": 0.9927611168562565,
-"grad_norm": 0.
+"grad_norm": 0.921875,
 "learning_rate": 0.00019743551343638324,
-"loss":
+"loss": 7.908926391601563,
-"mean_token_accuracy": 0.
+"mean_token_accuracy": 0.8567473825067282,
-"num_tokens":
+"num_tokens": 3922475.0,
 "step": 120
 },
 {
 "epoch": 1.0,
-"eval_entropy": 0.
+"eval_entropy": 0.5629051625728607,
-"eval_loss": 0.
+"eval_loss": 0.4985087513923645,
-"eval_mean_token_accuracy": 0.
+"eval_mean_token_accuracy": 0.8571729124978531,
-"eval_num_tokens":
+"eval_num_tokens": 3949618.0,
-"eval_runtime":
+"eval_runtime": 122.3216,
-"eval_samples_per_second":
+"eval_samples_per_second": 1.758,
-"eval_steps_per_second":
+"eval_steps_per_second": 1.758,
 "step": 121
 }
 ],
@@ -158,7 +158,7 @@
 "attributes": {}
 }
 },
-"total_flos":
+"total_flos": 5.944883327916494e+17,
 "train_batch_size": 1,
 "trial_name": null,
 "trial_params": null
checkpoint-121/training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:5bdf412780aad6b6bc055248dd1640e0d1a2282e1c11f28390eac7fae5fae303
 size 5713
checkpoint-242/adapter_config.json
CHANGED
@@ -24,217 +24,217 @@
 "megatron_core": "megatron.core",
 "modules_to_save": null,
 "peft_type": "LORA",
-"peft_version": "0.18.2.dev0@
 "qalora_group_size": 16,
 "r": 16,
 "rank_pattern": {},
 "revision": null,
 "target_modules": [
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.3.mlp.
 "model.language_model.layers.17.mlp.up_proj",
-"model.language_model.layers.
 "model.language_model.layers.27.self_attn.k_proj",
-"model.language_model.layers.28.mlp.down_proj",
-"model.language_model.layers.6.mlp.up_proj",
-"model.language_model.layers.24.self_attn.k_proj",
-"model.language_model.layers.6.self_attn.q_proj",
-"model.language_model.layers.17.self_attn.q_proj",
-"model.language_model.layers.15.self_attn.k_proj",
-"model.language_model.layers.24.mlp.up_proj",
-"model.language_model.layers.19.mlp.gate_proj",
-"model.language_model.layers.16.self_attn.k_proj",
-"model.language_model.layers.26.self_attn.q_proj",
-"model.language_model.layers.21.mlp.up_proj",
-"model.language_model.layers.17.mlp.down_proj",
-"model.language_model.layers.10.self_attn.v_proj",
-"model.language_model.layers.25.mlp.down_proj",
-"model.language_model.layers.11.mlp.up_proj",
-"model.language_model.layers.2.self_attn.o_proj",
-"model.language_model.layers.15.mlp.down_proj",
-"model.language_model.layers.10.self_attn.k_proj",
-"model.language_model.layers.15.self_attn.q_proj",
-"model.language_model.layers.9.self_attn.v_proj",
-"model.language_model.layers.27.self_attn.o_proj",
-"model.language_model.layers.3.self_attn.v_proj",
-"model.language_model.layers.10.self_attn.q_proj",
-"model.language_model.layers.21.mlp.gate_proj",
-"model.language_model.layers.25.self_attn.q_proj",
 "model.language_model.layers.5.self_attn.o_proj",
 "model.language_model.layers.2.mlp.gate_proj",
-"model.language_model.layers.9.mlp.gate_proj",
-"model.language_model.layers.19.self_attn.v_proj",
-"model.language_model.layers.18.self_attn.k_proj",
-"model.language_model.layers.19.mlp.down_proj",
-"model.language_model.layers.23.self_attn.o_proj",
-"model.language_model.layers.27.mlp.gate_proj",
-"model.language_model.layers.0.mlp.up_proj",
-"model.language_model.layers.20.mlp.gate_proj",
-"model.language_model.layers.28.self_attn.o_proj",
-"model.language_model.layers.4.self_attn.o_proj",
-"model.language_model.layers.28.self_attn.v_proj",
-"model.language_model.layers.11.self_attn.q_proj",
 "model.language_model.layers.26.self_attn.o_proj",
-"model.language_model.layers.9.mlp.down_proj",
-"model.language_model.layers.27.self_attn.v_proj",
-"model.language_model.layers.23.mlp.up_proj",
-"model.language_model.layers.2.mlp.up_proj",
-"model.language_model.layers.0.mlp.gate_proj",
-"model.language_model.layers.18.self_attn.o_proj",
-"model.language_model.layers.19.self_attn.k_proj",
-"model.language_model.layers.10.mlp.down_proj",
-"model.language_model.layers.10.mlp.gate_proj",
-"model.language_model.layers.0.self_attn.o_proj",
-"model.language_model.layers.20.mlp.down_proj",
-"model.language_model.layers.10.self_attn.o_proj",
-"model.language_model.layers.15.self_attn.o_proj",
-"model.language_model.layers.18.mlp.down_proj",
-"model.language_model.layers.1.self_attn.v_proj",
-"model.language_model.layers.13.self_attn.q_proj",
-"model.language_model.layers.18.self_attn.q_proj",
-"model.language_model.layers.3.mlp.down_proj",
-"model.language_model.layers.20.self_attn.k_proj",
-"model.language_model.layers.14.self_attn.o_proj",
-"model.language_model.layers.7.mlp.down_proj",
-"model.language_model.layers.25.self_attn.v_proj",
-"model.language_model.layers.29.mlp.gate_proj",
-"model.language_model.layers.2.self_attn.k_proj",
-"model.language_model.layers.5.self_attn.k_proj",
-"model.language_model.layers.9.self_attn.k_proj",
-"model.language_model.layers.1.mlp.gate_proj",
-"model.language_model.layers.8.self_attn.o_proj",
-"model.language_model.layers.22.self_attn.k_proj",
-"model.language_model.layers.3.self_attn.q_proj",
-"model.language_model.layers.23.self_attn.k_proj",
-"model.language_model.layers.3.self_attn.k_proj",
-"model.language_model.layers.19.self_attn.q_proj",
-"model.language_model.layers.18.self_attn.v_proj",
-"model.language_model.layers.10.mlp.up_proj",
-"model.language_model.layers.11.mlp.gate_proj",
-"model.language_model.layers.1.mlp.up_proj",
-"model.language_model.layers.18.mlp.gate_proj",
-"model.language_model.layers.8.mlp.gate_proj",
 "model.language_model.layers.7.mlp.gate_proj",
 "model.language_model.layers.8.mlp.up_proj",
-"model.language_model.layers.
-"model.language_model.layers.14.self_attn.k_proj",
-"model.language_model.layers.22.self_attn.q_proj",
-"model.language_model.layers.4.mlp.down_proj",
-"model.language_model.layers.22.mlp.gate_proj",
-"model.language_model.layers.15.self_attn.v_proj",
-"model.language_model.layers.21.self_attn.o_proj",
-"model.language_model.layers.11.self_attn.o_proj",
-"model.language_model.layers.20.mlp.up_proj",
-"model.language_model.layers.16.self_attn.q_proj",
-"model.language_model.layers.1.self_attn.k_proj",
-"model.language_model.layers.24.mlp.gate_proj",
-"model.language_model.layers.26.mlp.gate_proj",
 "model.language_model.layers.2.self_attn.q_proj",
-"model.language_model.layers.
 "model.language_model.layers.7.self_attn.q_proj",
-"model.language_model.layers.
-"model.language_model.layers.27.self_attn.q_proj",
-"model.language_model.layers.29.mlp.up_proj",
-"model.language_model.layers.28.self_attn.k_proj",
-"model.language_model.layers.24.self_attn.o_proj",
-"model.language_model.layers.26.self_attn.k_proj",
-"model.language_model.layers.21.mlp.down_proj",
-"model.language_model.layers.14.mlp.gate_proj",
-"model.language_model.layers.25.mlp.up_proj",
 "model.language_model.layers.27.mlp.down_proj",
-"model.language_model.layers.20.self_attn.v_proj",
-"model.language_model.layers.0.mlp.down_proj",
-"model.language_model.layers.6.self_attn.v_proj",
-"model.language_model.layers.4.self_attn.q_proj",
-"model.language_model.layers.9.self_attn.q_proj",
-"model.language_model.layers.0.self_attn.q_proj",
-"model.language_model.layers.27.mlp.up_proj",
-"model.language_model.layers.29.self_attn.k_proj",
-"model.language_model.layers.29.self_attn.q_proj",
-"model.language_model.layers.12.mlp.up_proj",
-"model.language_model.layers.6.mlp.down_proj",
 "model.language_model.layers.2.mlp.down_proj",
 "model.language_model.layers.6.mlp.gate_proj",
-"model.language_model.layers.24.self_attn.v_proj",
-"model.language_model.layers.4.mlp.up_proj",
 "model.language_model.layers.9.self_attn.o_proj",
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.21.self_attn.q_proj",
-"model.language_model.layers.15.mlp.up_proj",
-"model.language_model.layers.26.mlp.up_proj",
-"model.language_model.layers.26.mlp.down_proj",
-"model.language_model.layers.25.self_attn.o_proj",
 "model.language_model.layers.8.self_attn.v_proj",
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.
 "model.language_model.layers.3.mlp.gate_proj",
-"model.language_model.layers.
 "model.language_model.layers.9.mlp.up_proj",
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.16.
 "model.language_model.layers.8.self_attn.k_proj",
-"model.language_model.layers.12.
-"model.language_model.layers.7.self_attn.o_proj",
-"model.language_model.layers.18.mlp.up_proj",
-"model.language_model.layers.13.mlp.up_proj",
-"model.language_model.layers.16.mlp.up_proj",
-"model.language_model.layers.17.self_attn.k_proj",
-"model.language_model.layers.25.self_attn.k_proj",
-"model.language_model.layers.8.self_attn.q_proj",
 "model.language_model.layers.4.self_attn.v_proj",
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.13.self_attn.k_proj",
-"model.language_model.layers.7.self_attn.k_proj",
-"model.language_model.layers.22.self_attn.o_proj",
-"model.language_model.layers.22.mlp.up_proj",
-"model.language_model.layers.16.self_attn.o_proj",
-"model.language_model.layers.24.self_attn.q_proj",
-"model.language_model.layers.12.self_attn.q_proj",
 "model.language_model.layers.2.self_attn.v_proj",
-"model.language_model.layers.
 "model.language_model.layers.13.mlp.gate_proj",
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.26.self_attn.v_proj",
-"model.language_model.layers.28.mlp.up_proj",
-"model.language_model.layers.19.mlp.up_proj",
-"model.language_model.layers.16.mlp.gate_proj",
-"model.language_model.layers.7.self_attn.v_proj",
-"model.language_model.layers.25.mlp.gate_proj",
-"model.language_model.layers.13.self_attn.v_proj",
 "model.language_model.layers.20.self_attn.q_proj",
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.11.mlp.down_proj",
-"model.language_model.layers.0.self_attn.k_proj",
-"model.language_model.layers.21.self_attn.v_proj",
-"model.language_model.layers.28.self_attn.q_proj",
 "model.language_model.layers.29.self_attn.o_proj",
 "model.language_model.layers.11.self_attn.k_proj",
-"model.language_model.layers.
 "model.language_model.layers.7.mlp.up_proj",
 "model.language_model.layers.22.mlp.down_proj",
 "model.language_model.layers.20.self_attn.o_proj",
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.
-"model.language_model.layers.
 ],
 "target_parameters": null,
 "task_type": "CAUSAL_LM",

 "megatron_core": "megatron.core",
 "modules_to_save": null,
 "peft_type": "LORA",
+"peft_version": "0.18.2.dev0@7a4b07f2070162972f8c0515bc3acd19f81c0ad7",
 "qalora_group_size": 16,
 "r": 16,
 "rank_pattern": {},
 "revision": null,
 "target_modules": [
+"model.language_model.layers.17.self_attn.q_proj",
+"model.language_model.layers.7.self_attn.o_proj",
+"model.language_model.layers.3.mlp.down_proj",
+"model.language_model.layers.14.mlp.up_proj",
+"model.language_model.layers.17.self_attn.k_proj",
+"model.language_model.layers.25.self_attn.o_proj",
+"model.language_model.layers.6.self_attn.q_proj",
+"model.language_model.layers.5.self_attn.q_proj",
+"model.language_model.layers.1.mlp.gate_proj",
 "model.language_model.layers.17.mlp.up_proj",
+"model.language_model.layers.5.self_attn.k_proj",
+"model.language_model.layers.16.self_attn.o_proj",
+"model.language_model.layers.18.mlp.up_proj",
+"model.language_model.layers.25.self_attn.k_proj",
+"model.language_model.layers.23.mlp.down_proj",
+"model.language_model.layers.27.mlp.up_proj",
 "model.language_model.layers.27.self_attn.k_proj",
 "model.language_model.layers.5.self_attn.o_proj",
+"model.language_model.layers.22.self_attn.k_proj",
+"model.language_model.layers.1.mlp.down_proj",
 "model.language_model.layers.2.mlp.gate_proj",
 "model.language_model.layers.26.self_attn.o_proj",
 "model.language_model.layers.7.mlp.gate_proj",
+"model.language_model.layers.24.self_attn.q_proj",
+"model.language_model.layers.3.self_attn.o_proj",
+"model.language_model.layers.0.self_attn.q_proj",
+"model.language_model.layers.21.self_attn.k_proj",
+"model.language_model.layers.23.self_attn.o_proj",
+"model.language_model.layers.9.self_attn.q_proj",
+"model.language_model.layers.5.mlp.gate_proj",
+"model.language_model.layers.10.self_attn.v_proj",
 "model.language_model.layers.8.mlp.up_proj",
+"model.language_model.layers.26.self_attn.v_proj",
 "model.language_model.layers.2.self_attn.q_proj",
+"model.language_model.layers.13.self_attn.o_proj",
+"model.language_model.layers.7.mlp.down_proj",
+"model.language_model.layers.24.mlp.down_proj",
+"model.language_model.layers.6.self_attn.k_proj",
+"model.language_model.layers.0.self_attn.k_proj",
+"model.language_model.layers.1.mlp.up_proj",
+"model.language_model.layers.28.mlp.down_proj",
+"model.language_model.layers.2.self_attn.k_proj",
+"model.language_model.layers.22.mlp.up_proj",
 "model.language_model.layers.7.self_attn.q_proj",
+"model.language_model.layers.22.self_attn.q_proj",
 "model.language_model.layers.27.mlp.down_proj",
 "model.language_model.layers.2.mlp.down_proj",
+"model.language_model.layers.19.mlp.down_proj",
 "model.language_model.layers.6.mlp.gate_proj",
 "model.language_model.layers.9.self_attn.o_proj",
+"model.language_model.layers.15.mlp.down_proj",
+"model.language_model.layers.4.self_attn.o_proj",
+"model.language_model.layers.29.self_attn.k_proj",
+"model.language_model.layers.18.self_attn.q_proj",
+"model.language_model.layers.11.mlp.down_proj",
+"model.language_model.layers.26.mlp.gate_proj",
+"model.language_model.layers.23.mlp.up_proj",
+"model.language_model.layers.0.mlp.down_proj",
 "model.language_model.layers.8.self_attn.v_proj",
+"model.language_model.layers.14.self_attn.k_proj",
+"model.language_model.layers.21.mlp.up_proj",
+"model.language_model.layers.10.self_attn.o_proj",
+"model.language_model.layers.24.mlp.gate_proj",
+"model.language_model.layers.28.mlp.up_proj",
+"model.language_model.layers.29.mlp.down_proj",
 "model.language_model.layers.3.mlp.gate_proj",
+"model.language_model.layers.8.mlp.down_proj",
+"model.language_model.layers.9.mlp.down_proj",
+"model.language_model.layers.18.mlp.down_proj",
+"model.language_model.layers.19.mlp.gate_proj",
+"model.language_model.layers.26.mlp.down_proj",
+"model.language_model.layers.9.self_attn.v_proj",
 "model.language_model.layers.9.mlp.up_proj",
+"model.language_model.layers.10.self_attn.q_proj",
+"model.language_model.layers.11.self_attn.q_proj",
+"model.language_model.layers.18.mlp.gate_proj",
+"model.language_model.layers.16.self_attn.v_proj",
+"model.language_model.layers.1.self_attn.k_proj",
+"model.language_model.layers.25.mlp.up_proj",
+"model.language_model.layers.28.self_attn.v_proj",
+"model.language_model.layers.15.mlp.gate_proj",
+"model.language_model.layers.9.self_attn.k_proj",
+"model.language_model.layers.27.mlp.gate_proj",
+"model.language_model.layers.14.self_attn.o_proj",
+"model.language_model.layers.22.mlp.gate_proj",
+"model.language_model.layers.14.mlp.down_proj",
 "model.language_model.layers.8.self_attn.k_proj",
+"model.language_model.layers.12.self_attn.o_proj",
 "model.language_model.layers.4.self_attn.v_proj",
+"model.language_model.layers.10.mlp.down_proj",
+"model.language_model.layers.24.mlp.up_proj",
+"model.language_model.layers.25.mlp.gate_proj",
 "model.language_model.layers.2.self_attn.v_proj",
+"model.language_model.layers.4.self_attn.k_proj",
+"model.language_model.layers.8.self_attn.q_proj",
+"model.language_model.layers.18.self_attn.v_proj",
+"model.language_model.layers.27.self_attn.o_proj",
+"model.language_model.layers.16.self_attn.q_proj",
+"model.language_model.layers.3.mlp.up_proj",
 "model.language_model.layers.13.mlp.gate_proj",
+"model.language_model.layers.17.mlp.down_proj",
+"model.language_model.layers.28.self_attn.o_proj",
 "model.language_model.layers.20.self_attn.q_proj",
+"model.language_model.layers.0.mlp.up_proj",
+"model.language_model.layers.16.mlp.down_proj",
 "model.language_model.layers.29.self_attn.o_proj",
 "model.language_model.layers.11.self_attn.k_proj",
+"model.language_model.layers.20.self_attn.v_proj",
+"model.language_model.layers.14.self_attn.v_proj",
+"model.language_model.layers.11.mlp.gate_proj",
+"model.language_model.layers.21.mlp.down_proj",
+"model.language_model.layers.12.mlp.up_proj",
+"model.language_model.layers.10.mlp.gate_proj",
+"model.language_model.layers.10.self_attn.k_proj",
+"model.language_model.layers.27.self_attn.q_proj",
+"model.language_model.layers.8.mlp.gate_proj",
+"model.language_model.layers.19.self_attn.q_proj",
+"model.language_model.layers.23.self_attn.k_proj",
+"model.language_model.layers.13.self_attn.q_proj",
+"model.language_model.layers.0.self_attn.v_proj",
+"model.language_model.layers.8.self_attn.o_proj",
+"model.language_model.layers.0.mlp.gate_proj",
+"model.language_model.layers.17.mlp.gate_proj",
+"model.language_model.layers.1.self_attn.o_proj",
+"model.language_model.layers.14.self_attn.q_proj",
+"model.language_model.layers.14.mlp.gate_proj",
+"model.language_model.layers.12.mlp.down_proj",
|
| 160 |
+
"model.language_model.layers.21.self_attn.o_proj",
|
| 161 |
+
"model.language_model.layers.5.mlp.up_proj",
|
| 162 |
+
"model.language_model.layers.20.mlp.up_proj",
|
| 163 |
+
"model.language_model.layers.13.mlp.up_proj",
|
| 164 |
+
"model.language_model.layers.18.self_attn.k_proj",
|
| 165 |
+
"model.language_model.layers.23.mlp.gate_proj",
|
| 166 |
+
"model.language_model.layers.4.mlp.down_proj",
|
| 167 |
+
"model.language_model.layers.24.self_attn.o_proj",
|
| 168 |
+
"model.language_model.layers.28.self_attn.k_proj",
|
| 169 |
+
"model.language_model.layers.13.self_attn.v_proj",
|
| 170 |
+
"model.language_model.layers.6.mlp.down_proj",
|
| 171 |
+
"model.language_model.layers.13.mlp.down_proj",
|
| 172 |
+
"model.language_model.layers.21.self_attn.q_proj",
|
| 173 |
+
"model.language_model.layers.10.mlp.up_proj",
|
| 174 |
+
"model.language_model.layers.15.self_attn.v_proj",
|
| 175 |
+
"model.language_model.layers.0.self_attn.o_proj",
|
| 176 |
+
"model.language_model.layers.9.mlp.gate_proj",
|
| 177 |
+
"model.language_model.layers.16.mlp.up_proj",
|
| 178 |
+
"model.language_model.layers.11.self_attn.o_proj",
|
| 179 |
+
"model.language_model.layers.17.self_attn.o_proj",
|
| 180 |
+
"model.language_model.layers.20.mlp.gate_proj",
|
| 181 |
+
"model.language_model.layers.26.mlp.up_proj",
|
| 182 |
+
"model.language_model.layers.15.mlp.up_proj",
|
| 183 |
+
"model.language_model.layers.12.mlp.gate_proj",
|
| 184 |
+
"model.language_model.layers.22.self_attn.o_proj",
|
| 185 |
+
"model.language_model.layers.28.mlp.gate_proj",
|
| 186 |
+
"model.language_model.layers.21.mlp.gate_proj",
|
| 187 |
+
"model.language_model.layers.2.mlp.up_proj",
|
| 188 |
+
"model.language_model.layers.28.self_attn.q_proj",
|
| 189 |
+
"model.language_model.layers.29.self_attn.q_proj",
|
| 190 |
"model.language_model.layers.7.mlp.up_proj",
|
| 191 |
+
"model.language_model.layers.15.self_attn.q_proj",
|
| 192 |
+
"model.language_model.layers.19.self_attn.k_proj",
|
| 193 |
+
"model.language_model.layers.7.self_attn.v_proj",
|
| 194 |
+
"model.language_model.layers.29.mlp.gate_proj",
|
| 195 |
+
"model.language_model.layers.24.self_attn.k_proj",
|
| 196 |
+
"model.language_model.layers.16.mlp.gate_proj",
|
| 197 |
+
"model.language_model.layers.12.self_attn.k_proj",
|
| 198 |
+
"model.language_model.layers.4.mlp.up_proj",
|
| 199 |
+
"model.language_model.layers.20.mlp.down_proj",
|
| 200 |
+
"model.language_model.layers.5.mlp.down_proj",
|
| 201 |
"model.language_model.layers.22.mlp.down_proj",
|
| 202 |
+
"model.language_model.layers.3.self_attn.q_proj",
|
| 203 |
+
"model.language_model.layers.26.self_attn.k_proj",
|
| 204 |
"model.language_model.layers.20.self_attn.o_proj",
|
| 205 |
+
"model.language_model.layers.24.self_attn.v_proj",
|
| 206 |
+
"model.language_model.layers.21.self_attn.v_proj",
|
| 207 |
+
"model.language_model.layers.19.self_attn.o_proj",
|
| 208 |
+
"model.language_model.layers.29.mlp.up_proj",
|
| 209 |
+
"model.language_model.layers.13.self_attn.k_proj",
|
| 210 |
+
"model.language_model.layers.2.self_attn.o_proj",
|
| 211 |
+
"model.language_model.layers.16.self_attn.k_proj",
|
| 212 |
+
"model.language_model.layers.22.self_attn.v_proj",
|
| 213 |
+
"model.language_model.layers.25.self_attn.v_proj",
|
| 214 |
+
"model.language_model.layers.25.mlp.down_proj",
|
| 215 |
+
"model.language_model.layers.4.mlp.gate_proj",
|
| 216 |
+
"model.language_model.layers.6.self_attn.o_proj",
|
| 217 |
+
"model.language_model.layers.25.self_attn.q_proj",
|
| 218 |
+
"model.language_model.layers.7.self_attn.k_proj",
|
| 219 |
+
"model.language_model.layers.11.mlp.up_proj",
|
| 220 |
+
"model.language_model.layers.20.self_attn.k_proj",
|
| 221 |
+
"model.language_model.layers.6.mlp.up_proj",
|
| 222 |
+
"model.language_model.layers.15.self_attn.k_proj",
|
| 223 |
+
"model.language_model.layers.19.mlp.up_proj",
|
| 224 |
+
"model.language_model.layers.12.self_attn.q_proj",
|
| 225 |
+
"model.language_model.layers.4.self_attn.q_proj",
|
| 226 |
+
"model.language_model.layers.18.self_attn.o_proj",
|
| 227 |
+
"model.language_model.layers.1.self_attn.v_proj",
|
| 228 |
+
"model.language_model.layers.15.self_attn.o_proj",
|
| 229 |
+
"model.language_model.layers.19.self_attn.v_proj",
|
| 230 |
+
"model.language_model.layers.6.self_attn.v_proj",
|
| 231 |
+
"model.language_model.layers.12.self_attn.v_proj",
|
| 232 |
+
"model.language_model.layers.3.self_attn.k_proj",
|
| 233 |
+
"model.language_model.layers.26.self_attn.q_proj",
|
| 234 |
+
"model.language_model.layers.1.self_attn.q_proj",
|
| 235 |
+
"model.language_model.layers.27.self_attn.v_proj",
|
| 236 |
+
"model.language_model.layers.3.self_attn.v_proj",
|
| 237 |
+
"model.language_model.layers.23.self_attn.q_proj"
|
| 238 |
],
|
| 239 |
"target_parameters": null,
|
| 240 |
"task_type": "CAUSAL_LM",
|
checkpoint-242/adapter_model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:60ea5f7283321d8560f953cfc7a6167372abf99e9234771f4aba7dfebccfd34d
 size 37232104
checkpoint-242/optimizer.pt
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:9335e6ec0028f7338bdc2494df22f046fc2a105e9276c73cd535e9d8829a1a61
+size 38237839
checkpoint-242/rng_state.pth
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:a243edb8b1db34fb7115e5de0e37c8594f6006258a70188264a63e2194320e48
 size 14645
checkpoint-242/scheduler.pt
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:eb3700bbbbe712ec1083cf3ee091f9a368b7965703741f5d69ab3e651526ba31
 size 1465
checkpoint-242/tokenizer.json
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:cc8d3a0ce36466ccc1278bf987df5f71db1719b9ca6b4118264f45cb627bfe0f
+size 32169626
checkpoint-242/tokenizer_config.json
CHANGED
@@ -41,7 +41,7 @@
     "think_token": "<|think|>"
   },
   "pad_token": "<pad>",
-  "padding_side": "
+  "padding_side": "right",
   "processor_class": "Gemma4Processor",
   "response_schema": {
     "properties": {
checkpoint-242/trainer_state.json
CHANGED
@@ -1,7 +1,7 @@
 {
   "best_global_step": 242,
-  "best_metric": 0.
-  "best_model_checkpoint": "/
+  "best_metric": 0.44326454401016235,
+  "best_model_checkpoint": "/workspace/gemma4-26b-securecode/checkpoint-242",
   "epoch": 2.0,
   "eval_steps": 500,
   "global_step": 242,
@@ -10,265 +10,265 @@
   "is_world_process_zero": true,
   "log_history": [
     {
-      "entropy": 1.
+      "entropy": 1.0907821020111441,
       "epoch": 0.0827300930713547,
-      "grad_norm":
+      "grad_norm": 20.875,
       "learning_rate": 1.8e-05,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 80.26775512695312,
+      "mean_token_accuracy": 0.4542873948812485,
+      "num_tokens": 326185.0,
       "step": 10
     },
     {
-      "entropy": 0.
+      "entropy": 0.8271314173936843,
       "epoch": 0.1654601861427094,
-      "grad_norm":
+      "grad_norm": 8.75,
       "learning_rate": 3.8e-05,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 58.08096923828125,
+      "mean_token_accuracy": 0.5611657274886965,
+      "num_tokens": 653865.0,
       "step": 20
     },
     {
-      "entropy": 0.
+      "entropy": 0.4787554959766567,
       "epoch": 0.2481902792140641,
-      "grad_norm":
+      "grad_norm": 1.7109375,
       "learning_rate": 5.8e-05,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 25.493240356445312,
+      "mean_token_accuracy": 0.7378443486988544,
+      "num_tokens": 981337.0,
       "step": 30
     },
     {
-      "entropy":
+      "entropy": 0.7855595085769892,
       "epoch": 0.3309203722854188,
-      "grad_norm":
+      "grad_norm": 0.8671875,
       "learning_rate": 7.800000000000001e-05,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 14.629072570800782,
+      "mean_token_accuracy": 0.7917733617126942,
+      "num_tokens": 1308584.0,
       "step": 40
     },
     {
-      "entropy":
+      "entropy": 0.7569877350702882,
       "epoch": 0.4136504653567735,
-      "grad_norm":
+      "grad_norm": 2.109375,
       "learning_rate": 9.8e-05,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 12.609142303466797,
+      "mean_token_accuracy": 0.8013272784650326,
+      "num_tokens": 1635098.0,
       "step": 50
     },
     {
-      "entropy": 0.
+      "entropy": 0.6735223602503538,
       "epoch": 0.4963805584281282,
-      "grad_norm":
+      "grad_norm": 16.875,
       "learning_rate": 0.000118,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 10.704925537109375,
+      "mean_token_accuracy": 0.8209844313561916,
+      "num_tokens": 1962302.0,
       "step": 60
     },
     {
-      "entropy": 0.
+      "entropy": 0.6005677949637175,
       "epoch": 0.5791106514994829,
-      "grad_norm":
+      "grad_norm": 1.546875,
       "learning_rate": 0.000138,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 9.783185577392578,
+      "mean_token_accuracy": 0.8308866504579783,
+      "num_tokens": 2289982.0,
       "step": 70
     },
     {
-      "entropy": 0.
+      "entropy": 0.5877057909965515,
       "epoch": 0.6618407445708376,
-      "grad_norm":
+      "grad_norm": 11.25,
       "learning_rate": 0.00015800000000000002,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 9.298844909667968,
+      "mean_token_accuracy": 0.8359990835189819,
+      "num_tokens": 2616786.0,
       "step": 80
     },
     {
-      "entropy": 0.
+      "entropy": 0.5447238819673658,
       "epoch": 0.7445708376421923,
-      "grad_norm":
+      "grad_norm": 1.2890625,
       "learning_rate": 0.00017800000000000002,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 8.777264404296876,
+      "mean_token_accuracy": 0.8440194871276617,
+      "num_tokens": 2941975.0,
       "step": 90
     },
     {
-      "entropy": 0.
+      "entropy": 0.5323287105187774,
       "epoch": 0.827300930713547,
-      "grad_norm": 0.
+      "grad_norm": 0.70703125,
       "learning_rate": 0.00019800000000000002,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 8.489185333251953,
+      "mean_token_accuracy": 0.8486687760800123,
+      "num_tokens": 3269655.0,
       "step": 100
     },
     {
-      "entropy": 0.
+      "entropy": 0.4949887519702315,
      "epoch": 0.9100310237849017,
-      "grad_norm": 0.
+      "grad_norm": 0.439453125,
       "learning_rate": 0.00019942266891397815,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 8.192723083496094,
+      "mean_token_accuracy": 0.8528529018163681,
+      "num_tokens": 3595193.0,
       "step": 110
     },
     {
-      "entropy": 0.
+      "entropy": 0.4980895221233368,
       "epoch": 0.9927611168562565,
-      "grad_norm": 0.
+      "grad_norm": 0.921875,
       "learning_rate": 0.00019743551343638324,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 7.908926391601563,
+      "mean_token_accuracy": 0.8567473825067282,
+      "num_tokens": 3922475.0,
       "step": 120
     },
     {
       "epoch": 1.0,
-      "eval_entropy": 0.
-      "eval_loss": 0.
-      "eval_mean_token_accuracy": 0.
-      "eval_num_tokens":
-      "eval_runtime":
-      "eval_samples_per_second":
-      "eval_steps_per_second":
+      "eval_entropy": 0.5629051625728607,
+      "eval_loss": 0.4985087513923645,
+      "eval_mean_token_accuracy": 0.8571729124978531,
+      "eval_num_tokens": 3949618.0,
+      "eval_runtime": 122.3216,
+      "eval_samples_per_second": 1.758,
+      "eval_steps_per_second": 1.758,
       "step": 121
     },
     {
-      "entropy": 0.
+      "entropy": 0.5185692432937743,
       "epoch": 1.0744570837642193,
-      "grad_norm": 0.
+      "grad_norm": 0.37890625,
       "learning_rate": 0.00019405971991583108,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 7.807546997070313,
+      "mean_token_accuracy": 0.8577670869947989,
+      "num_tokens": 4244530.0,
       "step": 130
     },
     {
-      "entropy": 0.
+      "entropy": 0.4555334035307169,
       "epoch": 1.157187176835574,
-      "grad_norm": 0.
+      "grad_norm": 0.1953125,
       "learning_rate": 0.00018934339971482674,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 7.464869689941406,
+      "mean_token_accuracy": 0.8638800706714391,
+      "num_tokens": 4572210.0,
       "step": 140
     },
     {
-      "entropy": 0.
+      "entropy": 0.47754106651991607,
       "epoch": 1.2399172699069285,
-      "grad_norm":
+      "grad_norm": 1.21875,
       "learning_rate": 0.00018335376920472097,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 7.764054870605468,
+      "mean_token_accuracy": 0.8579531148076057,
+      "num_tokens": 4897694.0,
       "step": 150
     },
     {
-      "entropy": 0.
+      "entropy": 0.4550897226668894,
       "epoch": 1.3226473629782833,
-      "grad_norm": 0.
+      "grad_norm": 0.28515625,
       "learning_rate": 0.00017617619180688085,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 7.322466278076172,
+      "mean_token_accuracy": 0.8654993120580912,
+      "num_tokens": 5223716.0,
       "step": 160
     },
     {
-      "entropy": 0.
+      "entropy": 0.4635292864404619,
       "epoch": 1.4053774560496382,
-      "grad_norm": 0.
+      "grad_norm": 0.8203125,
       "learning_rate": 0.00016791296140450545,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 7.322091674804687,
+      "mean_token_accuracy": 0.8659177150577306,
+      "num_tokens": 5550963.0,
       "step": 170
     },
     {
-      "entropy": 0.
+      "entropy": 0.46733071468770504,
       "epoch": 1.4881075491209927,
-      "grad_norm": 0.
+      "grad_norm": 0.45703125,
       "learning_rate": 0.0001586818444637402,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 7.3169700622558596,
+      "mean_token_accuracy": 0.8659562785178423,
+      "num_tokens": 5878643.0,
       "step": 180
     },
     {
-      "entropy": 0.
+      "entropy": 0.4562882795929909,
       "epoch": 1.5708376421923473,
-      "grad_norm": 0.
+      "grad_norm": 0.302734375,
       "learning_rate": 0.0001486144016415862,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 7.291434478759766,
+      "mean_token_accuracy": 0.8656417932361364,
+      "num_tokens": 6206323.0,
       "step": 190
     },
     {
-      "entropy": 0.
+      "entropy": 0.4341404400765896,
       "epoch": 1.6535677352637022,
-      "grad_norm": 0.
+      "grad_norm": 0.30859375,
       "learning_rate": 0.00013785411280082746,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 6.904853057861328,
+      "mean_token_accuracy": 0.8713251128792763,
+      "num_tokens": 6533527.0,
       "step": 200
     },
     {
-      "entropy": 0.
+      "entropy": 0.4494163889437914,
       "epoch": 1.736297828335057,
-      "grad_norm": 0.
+      "grad_norm": 0.2177734375,
       "learning_rate": 0.00012655433215401438,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 7.195234680175782,
+      "mean_token_accuracy": 0.8673424527049065,
+      "num_tokens": 6861207.0,
       "step": 210
     },
     {
-      "entropy": 0.
+      "entropy": 0.46514057284221055,
       "epoch": 1.8190279214064116,
-      "grad_norm": 0.
+      "grad_norm": 0.220703125,
       "learning_rate": 0.00011487610267952142,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 7.431344604492187,
+      "mean_token_accuracy": 0.8633232209831476,
+      "num_tokens": 7188281.0,
       "step": 220
     },
     {
-      "entropy": 0.
+      "entropy": 0.43800092255696654,
       "epoch": 1.9017580144777662,
-      "grad_norm": 0.
+      "grad_norm": 0.1962890625,
       "learning_rate": 0.00010298586095833151,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 7.023017883300781,
+      "mean_token_accuracy": 0.8693898901343345,
+      "num_tokens": 7513788.0,
       "step": 230
     },
     {
-      "entropy": 0.
+      "entropy": 0.44280060222372414,
       "epoch": 1.984488107549121,
-      "grad_norm": 0.
+      "grad_norm": 0.453125,
       "learning_rate": 9.10530651419099e-05,
-      "loss":
-      "mean_token_accuracy": 0.
-      "num_tokens":
+      "loss": 7.070135498046875,
+      "mean_token_accuracy": 0.8684426795691251,
+      "num_tokens": 7839097.0,
       "step": 240
     },
     {
       "epoch": 2.0,
-      "eval_entropy": 0.
-      "eval_loss": 0.
-      "eval_mean_token_accuracy": 0.
-      "eval_num_tokens":
-      "eval_runtime":
-      "eval_samples_per_second":
-      "eval_steps_per_second":
+      "eval_entropy": 0.43673129948072653,
+      "eval_loss": 0.44326454401016235,
+      "eval_mean_token_accuracy": 0.8689799866010977,
+      "eval_num_tokens": 7899236.0,
+      "eval_runtime": 122.7611,
+      "eval_samples_per_second": 1.751,
+      "eval_steps_per_second": 1.751,
       "step": 242
     }
   ],
@@ -289,7 +289,7 @@
       "attributes": {}
     }
   },
-  "total_flos":
+  "total_flos": 1.1889766655832988e+18,
   "train_batch_size": 1,
   "trial_name": null,
   "trial_params": null
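The trainer_state.json diff above follows the `transformers.Trainer` state schema (`log_history` entries carry training metrics; entries with `eval_loss` are evaluation checkpoints). A minimal sketch, assuming only that schema, for pulling the eval-loss curve out of such a file:

```python
import json


def eval_history(state: dict) -> list[tuple[int, float]]:
    """Return (step, eval_loss) pairs from log_history entries that hold eval metrics."""
    return [
        (entry["step"], entry["eval_loss"])
        for entry in state.get("log_history", [])
        if "eval_loss" in entry
    ]


# In practice: state = json.load(open("checkpoint-242/trainer_state.json"))
# Minimal example mirroring the values shown in the diff above:
state = {
    "best_metric": 0.44326454401016235,
    "log_history": [
        {"step": 121, "epoch": 1.0, "eval_loss": 0.4985087513923645},
        {"step": 242, "epoch": 2.0, "eval_loss": 0.44326454401016235},
    ],
}
print(eval_history(state))  # [(121, 0.4985087513923645), (242, 0.44326454401016235)]
```

The best checkpoint recorded here (eval_loss 0.4433 at step 242) matches the file's `best_metric` field.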
checkpoint-242/training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:5bdf412780aad6b6bc055248dd1640e0d1a2282e1c11f28390eac7fae5fae303
 size 5713
checkpoint-363/adapter_config.json
CHANGED
|
@@ -24,217 +24,217 @@
|
|
| 24 |
"megatron_core": "megatron.core",
|
| 25 |
"modules_to_save": null,
|
| 26 |
"peft_type": "LORA",
|
| 27 |
-
"peft_version": "0.18.2.dev0@
|
| 28 |
"qalora_group_size": 16,
|
| 29 |
"r": 16,
|
| 30 |
"rank_pattern": {},
|
| 31 |
"revision": null,
|
| 32 |
"target_modules": [
|
| 33 |
-
"model.language_model.layers.
|
| 34 |
-
"model.language_model.layers.
|
| 35 |
-
"model.language_model.layers.3.mlp.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 36 |
"model.language_model.layers.17.mlp.up_proj",
|
| 37 |
-
"model.language_model.layers.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 38 |
"model.language_model.layers.27.self_attn.k_proj",
|
| 39 |
-
"model.language_model.layers.28.mlp.down_proj",
|
| 40 |
-
"model.language_model.layers.6.mlp.up_proj",
|
| 41 |
-
"model.language_model.layers.24.self_attn.k_proj",
|
| 42 |
-
"model.language_model.layers.6.self_attn.q_proj",
|
| 43 |
-
"model.language_model.layers.17.self_attn.q_proj",
|
| 44 |
-
"model.language_model.layers.15.self_attn.k_proj",
|
| 45 |
-
"model.language_model.layers.24.mlp.up_proj",
|
| 46 |
-
"model.language_model.layers.19.mlp.gate_proj",
|
| 47 |
-
"model.language_model.layers.16.self_attn.k_proj",
|
| 48 |
-
"model.language_model.layers.26.self_attn.q_proj",
|
| 49 |
-
"model.language_model.layers.21.mlp.up_proj",
|
| 50 |
-
"model.language_model.layers.17.mlp.down_proj",
|
| 51 |
-
"model.language_model.layers.10.self_attn.v_proj",
|
| 52 |
-
"model.language_model.layers.25.mlp.down_proj",
|
| 53 |
-
"model.language_model.layers.11.mlp.up_proj",
|
| 54 |
-
"model.language_model.layers.2.self_attn.o_proj",
|
| 55 |
-
"model.language_model.layers.15.mlp.down_proj",
|
| 56 |
-
"model.language_model.layers.10.self_attn.k_proj",
|
| 57 |
-
"model.language_model.layers.15.self_attn.q_proj",
|
| 58 |
-
"model.language_model.layers.9.self_attn.v_proj",
|
| 59 |
-
"model.language_model.layers.27.self_attn.o_proj",
|
| 60 |
-
"model.language_model.layers.3.self_attn.v_proj",
|
| 61 |
-
"model.language_model.layers.10.self_attn.q_proj",
|
| 62 |
-
"model.language_model.layers.21.mlp.gate_proj",
|
| 63 |
-
"model.language_model.layers.25.self_attn.q_proj",
|
| 64 |
"model.language_model.layers.5.self_attn.o_proj",
|
|
|
|
|
|
|
| 65 |
"model.language_model.layers.2.mlp.gate_proj",
|
| 66 |
-
"model.language_model.layers.9.mlp.gate_proj",
|
| 67 |
-
"model.language_model.layers.19.self_attn.v_proj",
|
| 68 |
-
"model.language_model.layers.18.self_attn.k_proj",
|
| 69 |
-
"model.language_model.layers.19.mlp.down_proj",
|
| 70 |
-
"model.language_model.layers.23.self_attn.o_proj",
|
| 71 |
-
"model.language_model.layers.27.mlp.gate_proj",
|
| 72 |
-
"model.language_model.layers.0.mlp.up_proj",
|
| 73 |
-
"model.language_model.layers.20.mlp.gate_proj",
|
| 74 |
-
"model.language_model.layers.28.self_attn.o_proj",
|
| 75 |
-
"model.language_model.layers.4.self_attn.o_proj",
|
| 76 |
-
"model.language_model.layers.28.self_attn.v_proj",
|
| 77 |
-
"model.language_model.layers.11.self_attn.q_proj",
|
| 78 |
"model.language_model.layers.26.self_attn.o_proj",
|
| 79 |
-
"model.language_model.layers.9.mlp.down_proj",
|
| 80 |
-
"model.language_model.layers.27.self_attn.v_proj",
|
| 81 |
-
"model.language_model.layers.23.mlp.up_proj",
|
| 82 |
-
"model.language_model.layers.2.mlp.up_proj",
|
| 83 |
-
"model.language_model.layers.0.mlp.gate_proj",
|
| 84 |
-
"model.language_model.layers.18.self_attn.o_proj",
|
| 85 |
-
"model.language_model.layers.19.self_attn.k_proj",
|
| 86 |
-
"model.language_model.layers.10.mlp.down_proj",
|
| 87 |
-
"model.language_model.layers.10.mlp.gate_proj",
|
| 88 |
-
"model.language_model.layers.0.self_attn.o_proj",
|
| 89 |
-
"model.language_model.layers.20.mlp.down_proj",
|
| 90 |
-
"model.language_model.layers.10.self_attn.o_proj",
|
| 91 |
-
"model.language_model.layers.15.self_attn.o_proj",
|
| 92 |
-
"model.language_model.layers.18.mlp.down_proj",
|
| 93 |
-
"model.language_model.layers.1.self_attn.v_proj",
|
| 94 |
-
"model.language_model.layers.13.self_attn.q_proj",
|
| 95 |
-
"model.language_model.layers.18.self_attn.q_proj",
|
| 96 |
-
"model.language_model.layers.3.mlp.down_proj",
|
| 97 |
-
"model.language_model.layers.20.self_attn.k_proj",
|
| 98 |
-
"model.language_model.layers.14.self_attn.o_proj",
|
| 99 |
-
"model.language_model.layers.7.mlp.down_proj",
|
| 100 |
-
"model.language_model.layers.25.self_attn.v_proj",
|
| 101 |
-
"model.language_model.layers.29.mlp.gate_proj",
|
| 102 |
-
"model.language_model.layers.2.self_attn.k_proj",
|
| 103 |
-
"model.language_model.layers.5.self_attn.k_proj",
|
| 104 |
-
"model.language_model.layers.9.self_attn.k_proj",
|
| 105 |
-
"model.language_model.layers.1.mlp.gate_proj",
|
| 106 |
-
"model.language_model.layers.8.self_attn.o_proj",
|
| 107 |
-
"model.language_model.layers.22.self_attn.k_proj",
|
| 108 |
-
"model.language_model.layers.3.self_attn.q_proj",
|
| 109 |
-
"model.language_model.layers.23.self_attn.k_proj",
|
| 110 |
-
"model.language_model.layers.3.self_attn.k_proj",
|
| 111 |
-
"model.language_model.layers.19.self_attn.q_proj",
|
| 112 |
-
"model.language_model.layers.18.self_attn.v_proj",
|
| 113 |
-
"model.language_model.layers.10.mlp.up_proj",
|
| 114 |
-
"model.language_model.layers.11.mlp.gate_proj",
|
| 115 |
-
"model.language_model.layers.1.mlp.up_proj",
|
| 116 |
-
"model.language_model.layers.18.mlp.gate_proj",
|
| 117 |
-
"model.language_model.layers.8.mlp.gate_proj",
|
| 118 |
"model.language_model.layers.7.mlp.gate_proj",
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 119 |
"model.language_model.layers.8.mlp.up_proj",
|
| 120 |
-
"model.language_model.layers.
|
| 121 |
-
"model.language_model.layers.14.self_attn.k_proj",
|
| 122 |
-
"model.language_model.layers.22.self_attn.q_proj",
|
| 123 |
-
"model.language_model.layers.4.mlp.down_proj",
|
| 124 |
-
"model.language_model.layers.22.mlp.gate_proj",
|
| 125 |
-
"model.language_model.layers.15.self_attn.v_proj",
|
| 126 |
-
"model.language_model.layers.21.self_attn.o_proj",
|
| 127 |
-
"model.language_model.layers.11.self_attn.o_proj",
|
| 128 |
-
"model.language_model.layers.20.mlp.up_proj",
|
| 129 |
-
"model.language_model.layers.16.self_attn.q_proj",
|
| 130 |
-
"model.language_model.layers.1.self_attn.k_proj",
|
| 131 |
-
"model.language_model.layers.24.mlp.gate_proj",
|
| 132 |
-
"model.language_model.layers.26.mlp.gate_proj",
|
| 133 |
"model.language_model.layers.2.self_attn.q_proj",
|
| 134 |
-
"model.language_model.layers.
|
| 135 |
"model.language_model.layers.7.self_attn.q_proj",
|
| 136 |
-
"model.language_model.layers.
|
| 137 |
-
"model.language_model.layers.27.self_attn.q_proj",
|
| 138 |
-
"model.language_model.layers.29.mlp.up_proj",
|
| 139 |
-
"model.language_model.layers.28.self_attn.k_proj",
|
| 140 |
-
"model.language_model.layers.24.self_attn.o_proj",
|
| 141 |
-
"model.language_model.layers.26.self_attn.k_proj",
|
| 142 |
-
"model.language_model.layers.21.mlp.down_proj",
|
| 143 |
-
"model.language_model.layers.14.mlp.gate_proj",
|
| 144 |
-
"model.language_model.layers.25.mlp.up_proj",
|
| 145 |
"model.language_model.layers.27.mlp.down_proj",
|
| 146 |
-
"model.language_model.layers.20.self_attn.v_proj",
|
| 147 |
-
"model.language_model.layers.0.mlp.down_proj",
|
| 148 |
-
"model.language_model.layers.6.self_attn.v_proj",
|
| 149 |
-
"model.language_model.layers.4.self_attn.q_proj",
|
| 150 |
-
"model.language_model.layers.9.self_attn.q_proj",
|
| 151 |
-
"model.language_model.layers.0.self_attn.q_proj",
|
| 152 |
-
"model.language_model.layers.27.mlp.up_proj",
|
| 153 |
-
"model.language_model.layers.29.self_attn.k_proj",
|
| 154 |
-
"model.language_model.layers.29.self_attn.q_proj",
|
| 155 |
-
"model.language_model.layers.12.mlp.up_proj",
|
| 156 |
-
"model.language_model.layers.6.mlp.down_proj",
|
| 157 |
"model.language_model.layers.2.mlp.down_proj",
|
|
|
|
| 158 |
"model.language_model.layers.6.mlp.gate_proj",
|
| 159 |
-
"model.language_model.layers.24.self_attn.v_proj",
|
| 160 |
-
"model.language_model.layers.4.mlp.up_proj",
|
| 161 |
"model.language_model.layers.9.self_attn.o_proj",
|
| 162 |
-
"model.language_model.layers.
|
| 163 |
-
"model.language_model.layers.
|
| 164 |
-
"model.language_model.layers.
|
| 165 |
-
"model.language_model.layers.
|
| 166 |
-
"model.language_model.layers.
|
| 167 |
-
"model.language_model.layers.
|
| 168 |
-
"model.language_model.layers.
|
| 169 |
-
"model.language_model.layers.
|
| 170 |
-
"model.language_model.layers.21.self_attn.q_proj",
|
| 171 |
-
"model.language_model.layers.15.mlp.up_proj",
|
| 172 |
-
"model.language_model.layers.26.mlp.up_proj",
|
| 173 |
-
"model.language_model.layers.26.mlp.down_proj",
|
| 174 |
-
"model.language_model.layers.25.self_attn.o_proj",
|
| 175 |
"model.language_model.layers.8.self_attn.v_proj",
|
| 176 |
-
"model.language_model.layers.
|
| 177 |
-
"model.language_model.layers.
|
| 178 |
-
"model.language_model.layers.
|
| 179 |
-
"model.language_model.layers.
|
| 180 |
-
"model.language_model.layers.
|
| 181 |
-
"model.language_model.layers.
|
| 182 |
"model.language_model.layers.3.mlp.gate_proj",
|
| 183 |
-
"model.language_model.layers.
|
| 184 |
"model.language_model.layers.9.mlp.up_proj",
|
| 185 |
-
"model.language_model.layers.
|
| 186 |
-
"model.language_model.layers.
|
| 187 |
-
"model.language_model.layers.
|
| 188 |
-
"model.language_model.layers.16.
|
| 189 |
"model.language_model.layers.8.self_attn.k_proj",
|
| 190 |
-
"model.language_model.layers.12.
|
| 191 |
-
"model.language_model.layers.7.self_attn.o_proj",
|
| 192 |
-
"model.language_model.layers.18.mlp.up_proj",
|
| 193 |
-
"model.language_model.layers.13.mlp.up_proj",
|
| 194 |
-
"model.language_model.layers.16.mlp.up_proj",
|
| 195 |
-
"model.language_model.layers.17.self_attn.k_proj",
|
| 196 |
-
"model.language_model.layers.25.self_attn.k_proj",
|
| 197 |
-
"model.language_model.layers.8.self_attn.q_proj",
|
| 198 |
"model.language_model.layers.4.self_attn.v_proj",
|
| 199 |
-
"model.language_model.layers.
|
| 200 |
-
"model.language_model.layers.
|
| 201 |
-
"model.language_model.layers.
|
| 202 |
-
"model.language_model.layers.13.self_attn.k_proj",
|
| 203 |
-
"model.language_model.layers.7.self_attn.k_proj",
|
| 204 |
-
"model.language_model.layers.22.self_attn.o_proj",
|
| 205 |
-
"model.language_model.layers.22.mlp.up_proj",
|
| 206 |
-
"model.language_model.layers.16.self_attn.o_proj",
|
| 207 |
-
"model.language_model.layers.24.self_attn.q_proj",
|
| 208 |
-
"model.language_model.layers.12.self_attn.q_proj",
|
| 209 |
"model.language_model.layers.2.self_attn.v_proj",
|
| 210 |
-
"model.language_model.layers.
|
| 211 |
"model.language_model.layers.13.mlp.gate_proj",
|
| 212 |
-
"model.language_model.layers.
|
| 213 |
-
"model.language_model.layers.
|
| 214 |
-
"model.language_model.layers.26.self_attn.v_proj",
|
| 215 |
-
"model.language_model.layers.28.mlp.up_proj",
|
| 216 |
-
"model.language_model.layers.19.mlp.up_proj",
|
| 217 |
-
"model.language_model.layers.16.mlp.gate_proj",
|
| 218 |
-
"model.language_model.layers.7.self_attn.v_proj",
|
| 219 |
-
"model.language_model.layers.25.mlp.gate_proj",
|
| 220 |
-
"model.language_model.layers.13.self_attn.v_proj",
|
| 221 |
"model.language_model.layers.20.self_attn.q_proj",
|
| 222 |
-
"model.language_model.layers.
|
| 223 |
-
"model.language_model.layers.
|
| 224 |
-
"model.language_model.layers.11.mlp.down_proj",
|
| 225 |
-
"model.language_model.layers.0.self_attn.k_proj",
|
| 226 |
-
"model.language_model.layers.21.self_attn.v_proj",
|
| 227 |
-
"model.language_model.layers.28.self_attn.q_proj",
|
| 228 |
"model.language_model.layers.29.self_attn.o_proj",
|
| 229 |
"model.language_model.layers.11.self_attn.k_proj",
|
| 230 |
-
"model.language_model.layers.
|
| 231 |
"model.language_model.layers.7.mlp.up_proj",
|
| 232 |
"model.language_model.layers.22.mlp.down_proj",
|
| 233 |
"model.language_model.layers.20.self_attn.o_proj",
|
| 234 |
-
"model.language_model.layers.
|
| 235 |
-
"model.language_model.layers.
|
| 236 |
-
"model.language_model.layers.
|
| 237 |
-
"model.language_model.layers.
|
| 238 |
],
|
| 239 |
"target_parameters": null,
|
| 240 |
"task_type": "CAUSAL_LM",
|
| 24 |
"megatron_core": "megatron.core",
|
| 25 |
"modules_to_save": null,
|
| 26 |
"peft_type": "LORA",
|
| 27 |
+
"peft_version": "0.18.2.dev0@7a4b07f2070162972f8c0515bc3acd19f81c0ad7",
|
| 28 |
"qalora_group_size": 16,
|
| 29 |
"r": 16,
|
| 30 |
"rank_pattern": {},
|
| 31 |
"revision": null,
|
| 32 |
"target_modules": [
|
| 33 |
+
"model.language_model.layers.17.self_attn.q_proj",
|
| 34 |
+
"model.language_model.layers.7.self_attn.o_proj",
|
| 35 |
+
"model.language_model.layers.3.mlp.down_proj",
|
| 36 |
+
"model.language_model.layers.14.mlp.up_proj",
|
| 37 |
+
"model.language_model.layers.17.self_attn.k_proj",
|
| 38 |
+
"model.language_model.layers.25.self_attn.o_proj",
|
| 39 |
+
"model.language_model.layers.6.self_attn.q_proj",
|
| 40 |
+
"model.language_model.layers.5.self_attn.q_proj",
|
| 41 |
+
"model.language_model.layers.1.mlp.gate_proj",
|
| 42 |
"model.language_model.layers.17.mlp.up_proj",
|
| 43 |
+
"model.language_model.layers.5.self_attn.k_proj",
|
| 44 |
+
"model.language_model.layers.16.self_attn.o_proj",
|
| 45 |
+
"model.language_model.layers.18.mlp.up_proj",
|
| 46 |
+
"model.language_model.layers.25.self_attn.k_proj",
|
| 47 |
+
"model.language_model.layers.23.mlp.down_proj",
|
| 48 |
+
"model.language_model.layers.27.mlp.up_proj",
|
| 49 |
"model.language_model.layers.27.self_attn.k_proj",
|
| 50 |
"model.language_model.layers.5.self_attn.o_proj",
|
| 51 |
+
"model.language_model.layers.22.self_attn.k_proj",
|
| 52 |
+
"model.language_model.layers.1.mlp.down_proj",
|
| 53 |
"model.language_model.layers.2.mlp.gate_proj",
|
| 54 |
"model.language_model.layers.26.self_attn.o_proj",
|
| 55 |
"model.language_model.layers.7.mlp.gate_proj",
|
| 56 |
+
"model.language_model.layers.24.self_attn.q_proj",
|
| 57 |
+
"model.language_model.layers.3.self_attn.o_proj",
|
| 58 |
+
"model.language_model.layers.0.self_attn.q_proj",
|
| 59 |
+
"model.language_model.layers.21.self_attn.k_proj",
|
| 60 |
+
"model.language_model.layers.23.self_attn.o_proj",
|
| 61 |
+
"model.language_model.layers.9.self_attn.q_proj",
|
| 62 |
+
"model.language_model.layers.5.mlp.gate_proj",
|
| 63 |
+
"model.language_model.layers.10.self_attn.v_proj",
|
| 64 |
"model.language_model.layers.8.mlp.up_proj",
|
| 65 |
+
"model.language_model.layers.26.self_attn.v_proj",
|
| 66 |
"model.language_model.layers.2.self_attn.q_proj",
|
| 67 |
+
"model.language_model.layers.13.self_attn.o_proj",
|
| 68 |
+
"model.language_model.layers.7.mlp.down_proj",
|
| 69 |
+
"model.language_model.layers.24.mlp.down_proj",
|
| 70 |
+
"model.language_model.layers.6.self_attn.k_proj",
|
| 71 |
+
"model.language_model.layers.0.self_attn.k_proj",
|
| 72 |
+
"model.language_model.layers.1.mlp.up_proj",
|
| 73 |
+
"model.language_model.layers.28.mlp.down_proj",
|
| 74 |
+
"model.language_model.layers.2.self_attn.k_proj",
|
| 75 |
+
"model.language_model.layers.22.mlp.up_proj",
|
| 76 |
"model.language_model.layers.7.self_attn.q_proj",
|
| 77 |
+
"model.language_model.layers.22.self_attn.q_proj",
|
| 78 |
"model.language_model.layers.27.mlp.down_proj",
|
| 79 |
"model.language_model.layers.2.mlp.down_proj",
|
| 80 |
+
"model.language_model.layers.19.mlp.down_proj",
|
| 81 |
"model.language_model.layers.6.mlp.gate_proj",
|
| 82 |
"model.language_model.layers.9.self_attn.o_proj",
|
| 83 |
+
"model.language_model.layers.15.mlp.down_proj",
|
| 84 |
+
"model.language_model.layers.4.self_attn.o_proj",
|
| 85 |
+
"model.language_model.layers.29.self_attn.k_proj",
|
| 86 |
+
"model.language_model.layers.18.self_attn.q_proj",
|
| 87 |
+
"model.language_model.layers.11.mlp.down_proj",
|
| 88 |
+
"model.language_model.layers.26.mlp.gate_proj",
|
| 89 |
+
"model.language_model.layers.23.mlp.up_proj",
|
| 90 |
+
"model.language_model.layers.0.mlp.down_proj",
|
| 91 |
"model.language_model.layers.8.self_attn.v_proj",
|
| 92 |
+
"model.language_model.layers.14.self_attn.k_proj",
|
| 93 |
+
"model.language_model.layers.21.mlp.up_proj",
|
| 94 |
+
"model.language_model.layers.10.self_attn.o_proj",
|
| 95 |
+
"model.language_model.layers.24.mlp.gate_proj",
|
| 96 |
+
"model.language_model.layers.28.mlp.up_proj",
|
| 97 |
+
"model.language_model.layers.29.mlp.down_proj",
|
| 98 |
"model.language_model.layers.3.mlp.gate_proj",
|
| 99 |
+
"model.language_model.layers.8.mlp.down_proj",
|
| 100 |
+
"model.language_model.layers.9.mlp.down_proj",
|
| 101 |
+
"model.language_model.layers.18.mlp.down_proj",
|
| 102 |
+
"model.language_model.layers.19.mlp.gate_proj",
|
| 103 |
+
"model.language_model.layers.26.mlp.down_proj",
|
| 104 |
+
"model.language_model.layers.9.self_attn.v_proj",
|
| 105 |
"model.language_model.layers.9.mlp.up_proj",
|
| 106 |
+
"model.language_model.layers.10.self_attn.q_proj",
|
| 107 |
+
"model.language_model.layers.11.self_attn.q_proj",
|
| 108 |
+
"model.language_model.layers.18.mlp.gate_proj",
|
| 109 |
+
"model.language_model.layers.16.self_attn.v_proj",
|
| 110 |
+
"model.language_model.layers.1.self_attn.k_proj",
|
| 111 |
+
"model.language_model.layers.25.mlp.up_proj",
|
| 112 |
+
"model.language_model.layers.28.self_attn.v_proj",
|
| 113 |
+
"model.language_model.layers.15.mlp.gate_proj",
|
| 114 |
+
"model.language_model.layers.9.self_attn.k_proj",
|
| 115 |
+
"model.language_model.layers.27.mlp.gate_proj",
|
| 116 |
+
"model.language_model.layers.14.self_attn.o_proj",
|
| 117 |
+
"model.language_model.layers.22.mlp.gate_proj",
|
| 118 |
+
"model.language_model.layers.14.mlp.down_proj",
|
| 119 |
"model.language_model.layers.8.self_attn.k_proj",
|
| 120 |
+
"model.language_model.layers.12.self_attn.o_proj",
|
| 121 |
"model.language_model.layers.4.self_attn.v_proj",
|
| 122 |
+
"model.language_model.layers.10.mlp.down_proj",
|
| 123 |
+
"model.language_model.layers.24.mlp.up_proj",
|
| 124 |
+
"model.language_model.layers.25.mlp.gate_proj",
|
| 125 |
"model.language_model.layers.2.self_attn.v_proj",
|
| 126 |
+
"model.language_model.layers.4.self_attn.k_proj",
|
| 127 |
+
"model.language_model.layers.8.self_attn.q_proj",
|
| 128 |
+
"model.language_model.layers.18.self_attn.v_proj",
|
| 129 |
+
"model.language_model.layers.27.self_attn.o_proj",
|
| 130 |
+
"model.language_model.layers.16.self_attn.q_proj",
|
| 131 |
+
"model.language_model.layers.3.mlp.up_proj",
|
| 132 |
"model.language_model.layers.13.mlp.gate_proj",
|
| 133 |
+
"model.language_model.layers.17.mlp.down_proj",
|
| 134 |
+
"model.language_model.layers.28.self_attn.o_proj",
|
| 135 |
"model.language_model.layers.20.self_attn.q_proj",
|
| 136 |
+
"model.language_model.layers.0.mlp.up_proj",
|
| 137 |
+
"model.language_model.layers.16.mlp.down_proj",
|
| 138 |
"model.language_model.layers.29.self_attn.o_proj",
|
| 139 |
"model.language_model.layers.11.self_attn.k_proj",
|
| 140 |
+
"model.language_model.layers.20.self_attn.v_proj",
|
| 141 |
+
"model.language_model.layers.14.self_attn.v_proj",
|
| 142 |
+
"model.language_model.layers.11.mlp.gate_proj",
|
| 143 |
+
"model.language_model.layers.21.mlp.down_proj",
|
| 144 |
+
"model.language_model.layers.12.mlp.up_proj",
|
| 145 |
+
"model.language_model.layers.10.mlp.gate_proj",
|
| 146 |
+
"model.language_model.layers.10.self_attn.k_proj",
|
| 147 |
+
"model.language_model.layers.27.self_attn.q_proj",
|
| 148 |
+
"model.language_model.layers.8.mlp.gate_proj",
|
| 149 |
+
"model.language_model.layers.19.self_attn.q_proj",
|
| 150 |
+
"model.language_model.layers.23.self_attn.k_proj",
|
| 151 |
+
"model.language_model.layers.13.self_attn.q_proj",
|
| 152 |
+
"model.language_model.layers.0.self_attn.v_proj",
|
| 153 |
+
"model.language_model.layers.8.self_attn.o_proj",
|
| 154 |
+
"model.language_model.layers.0.mlp.gate_proj",
|
| 155 |
+
"model.language_model.layers.17.mlp.gate_proj",
|
| 156 |
+
"model.language_model.layers.1.self_attn.o_proj",
|
| 157 |
+
"model.language_model.layers.14.self_attn.q_proj",
|
| 158 |
+
"model.language_model.layers.14.mlp.gate_proj",
|
| 159 |
+
"model.language_model.layers.12.mlp.down_proj",
|
| 160 |
+
"model.language_model.layers.21.self_attn.o_proj",
|
| 161 |
+
"model.language_model.layers.5.mlp.up_proj",
|
| 162 |
+
"model.language_model.layers.20.mlp.up_proj",
|
| 163 |
+
"model.language_model.layers.13.mlp.up_proj",
|
| 164 |
+
"model.language_model.layers.18.self_attn.k_proj",
|
| 165 |
+
"model.language_model.layers.23.mlp.gate_proj",
|
| 166 |
+
"model.language_model.layers.4.mlp.down_proj",
|
| 167 |
+
"model.language_model.layers.24.self_attn.o_proj",
|
| 168 |
+
"model.language_model.layers.28.self_attn.k_proj",
|
| 169 |
+
"model.language_model.layers.13.self_attn.v_proj",
|
| 170 |
+
"model.language_model.layers.6.mlp.down_proj",
|
| 171 |
+
"model.language_model.layers.13.mlp.down_proj",
|
| 172 |
+
"model.language_model.layers.21.self_attn.q_proj",
|
| 173 |
+
"model.language_model.layers.10.mlp.up_proj",
|
| 174 |
+
"model.language_model.layers.15.self_attn.v_proj",
|
| 175 |
+
"model.language_model.layers.0.self_attn.o_proj",
|
| 176 |
+
"model.language_model.layers.9.mlp.gate_proj",
|
| 177 |
+
"model.language_model.layers.16.mlp.up_proj",
|
| 178 |
+
"model.language_model.layers.11.self_attn.o_proj",
|
| 179 |
+
"model.language_model.layers.17.self_attn.o_proj",
|
| 180 |
+
"model.language_model.layers.20.mlp.gate_proj",
|
| 181 |
+
"model.language_model.layers.26.mlp.up_proj",
|
| 182 |
+
"model.language_model.layers.15.mlp.up_proj",
|
| 183 |
+
"model.language_model.layers.12.mlp.gate_proj",
|
| 184 |
+
"model.language_model.layers.22.self_attn.o_proj",
|
| 185 |
+
"model.language_model.layers.28.mlp.gate_proj",
|
| 186 |
+
"model.language_model.layers.21.mlp.gate_proj",
|
| 187 |
+
"model.language_model.layers.2.mlp.up_proj",
|
| 188 |
+
"model.language_model.layers.28.self_attn.q_proj",
|
| 189 |
+
"model.language_model.layers.29.self_attn.q_proj",
|
| 190 |
"model.language_model.layers.7.mlp.up_proj",
|
| 191 |
+
"model.language_model.layers.15.self_attn.q_proj",
|
| 192 |
+
"model.language_model.layers.19.self_attn.k_proj",
|
| 193 |
+
"model.language_model.layers.7.self_attn.v_proj",
|
| 194 |
+
"model.language_model.layers.29.mlp.gate_proj",
|
| 195 |
+
"model.language_model.layers.24.self_attn.k_proj",
|
| 196 |
+
"model.language_model.layers.16.mlp.gate_proj",
|
| 197 |
+
"model.language_model.layers.12.self_attn.k_proj",
|
| 198 |
+
"model.language_model.layers.4.mlp.up_proj",
|
| 199 |
+
"model.language_model.layers.20.mlp.down_proj",
|
| 200 |
+
"model.language_model.layers.5.mlp.down_proj",
|
| 201 |
"model.language_model.layers.22.mlp.down_proj",
|
| 202 |
+
"model.language_model.layers.3.self_attn.q_proj",
|
| 203 |
+
"model.language_model.layers.26.self_attn.k_proj",
|
| 204 |
"model.language_model.layers.20.self_attn.o_proj",
|
| 205 |
+
"model.language_model.layers.24.self_attn.v_proj",
|
| 206 |
+
"model.language_model.layers.21.self_attn.v_proj",
|
| 207 |
+
"model.language_model.layers.19.self_attn.o_proj",
|
| 208 |
+
"model.language_model.layers.29.mlp.up_proj",
|
| 209 |
+
"model.language_model.layers.13.self_attn.k_proj",
|
| 210 |
+
"model.language_model.layers.2.self_attn.o_proj",
|
| 211 |
+
"model.language_model.layers.16.self_attn.k_proj",
|
| 212 |
+
"model.language_model.layers.22.self_attn.v_proj",
|
| 213 |
+
"model.language_model.layers.25.self_attn.v_proj",
|
| 214 |
+
"model.language_model.layers.25.mlp.down_proj",
|
| 215 |
+
"model.language_model.layers.4.mlp.gate_proj",
|
| 216 |
+
"model.language_model.layers.6.self_attn.o_proj",
|
| 217 |
+
"model.language_model.layers.25.self_attn.q_proj",
|
| 218 |
+
"model.language_model.layers.7.self_attn.k_proj",
|
| 219 |
+
"model.language_model.layers.11.mlp.up_proj",
|
| 220 |
+
"model.language_model.layers.20.self_attn.k_proj",
|
| 221 |
+
"model.language_model.layers.6.mlp.up_proj",
|
| 222 |
+
"model.language_model.layers.15.self_attn.k_proj",
|
| 223 |
+
"model.language_model.layers.19.mlp.up_proj",
|
| 224 |
+
"model.language_model.layers.12.self_attn.q_proj",
|
| 225 |
+
"model.language_model.layers.4.self_attn.q_proj",
|
| 226 |
+
"model.language_model.layers.18.self_attn.o_proj",
|
| 227 |
+
"model.language_model.layers.1.self_attn.v_proj",
|
| 228 |
+
"model.language_model.layers.15.self_attn.o_proj",
|
| 229 |
+
"model.language_model.layers.19.self_attn.v_proj",
|
| 230 |
+
"model.language_model.layers.6.self_attn.v_proj",
|
| 231 |
+
"model.language_model.layers.12.self_attn.v_proj",
|
| 232 |
+
"model.language_model.layers.3.self_attn.k_proj",
|
| 233 |
+
"model.language_model.layers.26.self_attn.q_proj",
|
| 234 |
+
"model.language_model.layers.1.self_attn.q_proj",
|
| 235 |
+
"model.language_model.layers.27.self_attn.v_proj",
|
| 236 |
+
"model.language_model.layers.3.self_attn.v_proj",
|
| 237 |
+
"model.language_model.layers.23.self_attn.q_proj"
|
| 238 |
],
|
| 239 |
"target_parameters": null,
|
| 240 |
"task_type": "CAUSAL_LM",
|
checkpoint-363/adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:4527c85eaf804e53a289f91401b432b19a8b6349499a84dafcdb818b609b01a5
 size 37232104
checkpoint-363/optimizer.pt CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:b804b2955b030b1c2abd00459acb0ce56cea15fc6e13966e3a73a2e51f70590e
+size 38238223
checkpoint-363/rng_state.pth CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:ae282780d1020b3190a6ba66893846c3b873243e07557d4974d44616c175df20
 size 14645
checkpoint-363/scheduler.pt CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:b4e1367a2173a2cbb4dcde69fb00dd0c19fe22659c816858cfd69bdabf057cea
 size 1465
checkpoint-363/tokenizer.json CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:cc8d3a0ce36466ccc1278bf987df5f71db1719b9ca6b4118264f45cb627bfe0f
+size 32169626
checkpoint-363/tokenizer_config.json CHANGED
@@ -41,7 +41,7 @@
 "think_token": "<|think|>"
 },
 "pad_token": "<pad>",
-"padding_side": "
+"padding_side": "right",
 "processor_class": "Gemma4Processor",
 "response_schema": {
 "properties": {
checkpoint-363/trainer_state.json CHANGED
@@ -1,7 +1,7 @@
 {
 "best_global_step": 363,
-"best_metric": 0.
-"best_model_checkpoint": "/
 "epoch": 3.0,
 "eval_steps": 500,
 "global_step": 363,
@@ -10,396 +10,396 @@
|
|
| 10 |
"is_world_process_zero": true,
|
| 11 |
"log_history": [
|
| 12 |
{
|
| 13 |
-
"entropy": 1.
|
| 14 |
"epoch": 0.0827300930713547,
|
| 15 |
-
"grad_norm":
|
| 16 |
"learning_rate": 1.8e-05,
|
| 17 |
-
"loss":
|
| 18 |
-
"mean_token_accuracy": 0.
|
| 19 |
-
"num_tokens":
|
| 20 |
"step": 10
|
| 21 |
},
|
| 22 |
{
|
| 23 |
-
"entropy": 0.
|
| 24 |
"epoch": 0.1654601861427094,
|
| 25 |
-
"grad_norm":
|
| 26 |
"learning_rate": 3.8e-05,
|
| 27 |
-
"loss":
|
| 28 |
-
"mean_token_accuracy": 0.
|
| 29 |
-
"num_tokens":
|
| 30 |
"step": 20
|
| 31 |
},
|
| 32 |
{
|
| 33 |
-
"entropy": 0.
|
| 34 |
"epoch": 0.2481902792140641,
|
| 35 |
-
"grad_norm":
|
| 36 |
"learning_rate": 5.8e-05,
|
| 37 |
-
"loss":
|
| 38 |
-
"mean_token_accuracy": 0.
|
| 39 |
-
"num_tokens":
|
| 40 |
"step": 30
|
| 41 |
},
|
| 42 |
{
|
| 43 |
-
"entropy":
|
| 44 |
"epoch": 0.3309203722854188,
|
| 45 |
-
"grad_norm":
|
| 46 |
"learning_rate": 7.800000000000001e-05,
|
| 47 |
-
"loss":
|
| 48 |
-
"mean_token_accuracy": 0.
|
| 49 |
-
"num_tokens":
|
| 50 |
"step": 40
|
| 51 |
},
|
| 52 |
{
|
| 53 |
-
"entropy":
|
| 54 |
"epoch": 0.4136504653567735,
|
| 55 |
-
"grad_norm":
|
| 56 |
"learning_rate": 9.8e-05,
|
| 57 |
-
"loss":
|
| 58 |
-
"mean_token_accuracy": 0.
|
| 59 |
-
"num_tokens":
|
| 60 |
"step": 50
|
| 61 |
},
|
| 62 |
{
|
| 63 |
-
"entropy": 0.
|
| 64 |
"epoch": 0.4963805584281282,
|
| 65 |
-
"grad_norm":
|
| 66 |
"learning_rate": 0.000118,
|
| 67 |
-
"loss":
|
| 68 |
-
"mean_token_accuracy": 0.
|
| 69 |
-
"num_tokens":
|
| 70 |
"step": 60
|
| 71 |
},
|
| 72 |
{
|
| 73 |
-
"entropy": 0.
|
| 74 |
"epoch": 0.5791106514994829,
|
| 75 |
-
"grad_norm":
|
| 76 |
"learning_rate": 0.000138,
|
| 77 |
-
"loss":
|
| 78 |
-
"mean_token_accuracy": 0.
|
| 79 |
-
"num_tokens":
|
| 80 |
"step": 70
|
| 81 |
},
|
| 82 |
{
|
| 83 |
-
"entropy": 0.
|
| 84 |
"epoch": 0.6618407445708376,
|
| 85 |
-
"grad_norm":
|
| 86 |
"learning_rate": 0.00015800000000000002,
|
| 87 |
-
"loss":
|
| 88 |
-
"mean_token_accuracy": 0.
|
| 89 |
-
"num_tokens":
|
| 90 |
"step": 80
|
| 91 |
},
|
| 92 |
{
|
| 93 |
-
"entropy": 0.
|
| 94 |
"epoch": 0.7445708376421923,
|
| 95 |
-
"grad_norm":
|
| 96 |
"learning_rate": 0.00017800000000000002,
|
| 97 |
-
"loss":
|
| 98 |
-
"mean_token_accuracy": 0.
|
| 99 |
-
"num_tokens":
|
| 100 |
"step": 90
|
| 101 |
},
|
| 102 |
{
|
| 103 |
-
"entropy": 0.
|
| 104 |
"epoch": 0.827300930713547,
|
| 105 |
-
"grad_norm": 0.
|
| 106 |
"learning_rate": 0.00019800000000000002,
|
| 107 |
-
"loss":
|
| 108 |
-
"mean_token_accuracy": 0.
|
| 109 |
-
"num_tokens":
|
| 110 |
"step": 100
|
| 111 |
},
|
| 112 |
{
|
| 113 |
-
"entropy": 0.
|
| 114 |
"epoch": 0.9100310237849017,
|
| 115 |
-
"grad_norm": 0.
|
| 116 |
"learning_rate": 0.00019942266891397815,
|
| 117 |
-
"loss":
|
| 118 |
-
"mean_token_accuracy": 0.
|
| 119 |
-
"num_tokens":
|
| 120 |
"step": 110
|
| 121 |
},
|
| 122 |
{
|
| 123 |
-
"entropy": 0.
|
| 124 |
"epoch": 0.9927611168562565,
|
| 125 |
-
"grad_norm": 0.
|
| 126 |
"learning_rate": 0.00019743551343638324,
|
| 127 |
-
"loss":
|
| 128 |
-
"mean_token_accuracy": 0.
|
| 129 |
-
"num_tokens":
|
| 130 |
"step": 120
|
| 131 |
},
|
| 132 |
{
|
| 133 |
"epoch": 1.0,
|
| 134 |
-
"eval_entropy": 0.
|
| 135 |
-
"eval_loss": 0.
|
| 136 |
-
"eval_mean_token_accuracy": 0.
|
| 137 |
-
"eval_num_tokens":
|
| 138 |
-
"eval_runtime":
|
| 139 |
-
"eval_samples_per_second":
|
| 140 |
-
"eval_steps_per_second":
|
| 141 |
"step": 121
|
| 142 |
},
|
| 143 |
{
|
| 144 |
-
"entropy": 0.
|
| 145 |
"epoch": 1.0744570837642193,
|
| 146 |
-
"grad_norm": 0.
|
| 147 |
"learning_rate": 0.00019405971991583108,
|
| 148 |
-
"loss":
|
| 149 |
-
"mean_token_accuracy": 0.
|
| 150 |
-
"num_tokens":
|
| 151 |
"step": 130
|
| 152 |
},
|
| 153 |
{
|
| 154 |
-
"entropy": 0.
|
| 155 |
"epoch": 1.157187176835574,
|
| 156 |
-
"grad_norm": 0.
|
| 157 |
"learning_rate": 0.00018934339971482674,
|
| 158 |
-
"loss":
|
| 159 |
-
"mean_token_accuracy": 0.
|
| 160 |
-
"num_tokens":
|
| 161 |
"step": 140
|
| 162 |
},
|
| 163 |
{
|
| 164 |
-
"entropy": 0.
|
| 165 |
"epoch": 1.2399172699069285,
|
| 166 |
-
"grad_norm":
|
| 167 |
"learning_rate": 0.00018335376920472097,
|
| 168 |
-
"loss":
|
| 169 |
-
"mean_token_accuracy": 0.
|
| 170 |
-
"num_tokens":
|
| 171 |
"step": 150
|
| 172 |
},
|
| 173 |
{
|
| 174 |
-
"entropy": 0.
|
| 175 |
"epoch": 1.3226473629782833,
|
| 176 |
-
"grad_norm": 0.
|
| 177 |
"learning_rate": 0.00017617619180688085,
|
| 178 |
-
"loss":
|
| 179 |
-
"mean_token_accuracy": 0.
|
| 180 |
-
"num_tokens":
|
| 181 |
"step": 160
|
| 182 |
},
|
| 183 |
{
|
| 184 |
-
"entropy": 0.
|
| 185 |
"epoch": 1.4053774560496382,
|
| 186 |
-
"grad_norm": 0.
|
| 187 |
"learning_rate": 0.00016791296140450545,
|
| 188 |
-
"loss":
|
| 189 |
-
"mean_token_accuracy": 0.
|
| 190 |
-
"num_tokens":
|
| 191 |
"step": 170
|
| 192 |
},
|
| 193 |
{
|
| 194 |
-
"entropy": 0.
|
| 195 |
"epoch": 1.4881075491209927,
|
| 196 |
-
"grad_norm": 0.
|
| 197 |
"learning_rate": 0.0001586818444637402,
|
| 198 |
-
"loss":
|
| 199 |
-
"mean_token_accuracy": 0.
|
| 200 |
-
"num_tokens":
|
| 201 |
"step": 180
|
| 202 |
},
|
| 203 |
{
|
| 204 |
-
"entropy": 0.
|
| 205 |
"epoch": 1.5708376421923473,
|
| 206 |
-
"grad_norm": 0.
|
| 207 |
"learning_rate": 0.0001486144016415862,
|
| 208 |
-
"loss":
|
| 209 |
-
"mean_token_accuracy": 0.
|
| 210 |
-
"num_tokens":
|
| 211 |
"step": 190
|
| 212 |
},
|
| 213 |
{
|
| 214 |
-
"entropy": 0.
|
| 215 |
"epoch": 1.6535677352637022,
|
| 216 |
-
"grad_norm": 0.
|
| 217 |
"learning_rate": 0.00013785411280082746,
|
| 218 |
-
"loss":
|
| 219 |
-
"mean_token_accuracy": 0.
|
| 220 |
-
"num_tokens":
|
| 221 |
"step": 200
|
| 222 |
},
|
| 223 |
{
|
| 224 |
-
"entropy": 0.
|
| 225 |
"epoch": 1.736297828335057,
|
| 226 |
-
"grad_norm": 0.
|
| 227 |
"learning_rate": 0.00012655433215401438,
|
| 228 |
-
"loss":
|
| 229 |
-
"mean_token_accuracy": 0.
|
| 230 |
-
"num_tokens":
|
| 231 |
"step": 210
|
| 232 |
},
|
| 233 |
{
|
| 234 |
-
"entropy": 0.
|
| 235 |
"epoch": 1.8190279214064116,
|
| 236 |
-
"grad_norm": 0.
|
| 237 |
"learning_rate": 0.00011487610267952142,
|
| 238 |
-
"loss":
|
| 239 |
-
"mean_token_accuracy": 0.
|
| 240 |
-
"num_tokens":
|
| 241 |
"step": 220
|
| 242 |
},
|
| 243 |
{
|
| 244 |
-
"entropy": 0.
|
| 245 |
"epoch": 1.9017580144777662,
|
| 246 |
-
"grad_norm": 0.
|
| 247 |
"learning_rate": 0.00010298586095833151,
|
| 248 |
-
"loss":
|
| 249 |
-
"mean_token_accuracy": 0.
|
| 250 |
-
"num_tokens":
|
| 251 |
"step": 230
|
| 252 |
},
|
| 253 |
{
|
| 254 |
-
"entropy": 0.
|
| 255 |
"epoch": 1.984488107549121,
|
| 256 |
-
"grad_norm": 0.
|
| 257 |
"learning_rate": 9.10530651419099e-05,
|
| 258 |
-
"loss":
|
| 259 |
-
"mean_token_accuracy": 0.
|
| 260 |
-
"num_tokens":
|
| 261 |
"step": 240
|
| 262 |
},
|
| 263 |
{
|
| 264 |
"epoch": 2.0,
|
| 265 |
-
"eval_entropy": 0.
|
| 266 |
-
"eval_loss": 0.
|
| 267 |
-
"eval_mean_token_accuracy": 0.
|
| 268 |
-
"eval_num_tokens":
|
| 269 |
-
"eval_runtime":
|
| 270 |
-
"eval_samples_per_second":
|
| 271 |
-
"eval_steps_per_second":
|
| 272 |
"step": 242
|
| 273 |
},
|
| 274 |
{
|
| 275 |
-
"entropy": 0.
|
| 276 |
"epoch": 2.066184074457084,
|
| 277 |
-
"grad_norm": 0.
|
| 278 |
"learning_rate": 7.924777985705556e-05,
|
| 279 |
-
"loss":
|
| 280 |
-
"mean_token_accuracy": 0.
|
| 281 |
-
"num_tokens":
|
| 282 |
"step": 250
|
| 283 |
},
|
| 284 |
{
|
| 285 |
-
"entropy": 0.
|
| 286 |
"epoch": 2.1489141675284387,
|
| 287 |
-
"grad_norm": 0.
|
| 288 |
"learning_rate": 6.773825246734622e-05,
|
| 289 |
-
"loss":
|
| 290 |
-
"mean_token_accuracy": 0.
|
| 291 |
-
"num_tokens":
|
| 292 |
"step": 260
|
| 293 |
},
|
| 294 |
{
|
| 295 |
-
"entropy": 0.
|
| 296 |
"epoch": 2.231644260599793,
|
| 297 |
-
"grad_norm": 0.
|
| 298 |
"learning_rate": 5.668851523397829e-05,
|
| 299 |
-
"loss":
|
| 300 |
-
"mean_token_accuracy": 0.
|
| 301 |
-
"num_tokens":
|
| 302 |
"step": 270
|
| 303 |
},
|
| 304 |
{
|
| 305 |
-
"entropy": 0.
|
| 306 |
"epoch": 2.314374353671148,
|
| 307 |
-
"grad_norm": 0.
|
| 308 |
"learning_rate": 4.625604754968839e-05,
|
| 309 |
-
"loss":
|
| 310 |
-
"mean_token_accuracy": 0.
|
| 311 |
-
"num_tokens":
|
| 312 |
"step": 280
|
| 313 |
},
|
| 314 |
{
|
| 315 |
-
"entropy": 0.
|
| 316 |
"epoch": 2.3971044467425027,
|
| 317 |
-
"grad_norm": 0.
|
| 318 |
"learning_rate": 3.658953156328857e-05,
|
| 319 |
-
"loss":
|
| 320 |
-
"mean_token_accuracy": 0.
|
| 321 |
-
"num_tokens":
|
| 322 |
"step": 290
|
| 323 |
},
|
| 324 |
{
|
| 325 |
-
"entropy": 0.
|
| 326 |
"epoch": 2.479834539813857,
|
| 327 |
-
"grad_norm":
|
| 328 |
"learning_rate": 2.7826733181357932e-05,
|
| 329 |
-
"loss":
|
| 330 |
-
"mean_token_accuracy": 0.
|
| 331 |
-
"num_tokens":
|
| 332 |
"step": 300
|
| 333 |
},
|
| 334 |
{
|
| 335 |
-
"entropy": 0.
|
| 336 |
"epoch": 2.562564632885212,
|
| 337 |
-
"grad_norm": 0.
|
| 338 |
"learning_rate": 2.0092538646774072e-05,
|
| 339 |
-
"loss":
|
| 340 |
-
"mean_token_accuracy": 0.
|
| 341 |
-
"num_tokens":
|
| 342 |
"step": 310
|
| 343 |
},
|
| 344 |
{
|
| 345 |
-
"entropy": 0.
|
| 346 |
"epoch": 2.6452947259565667,
|
| 347 |
-
"grad_norm": 0.
|
| 348 |
"learning_rate": 1.3497174676506674e-05,
|
| 349 |
-
"loss":
|
| 350 |
-
"mean_token_accuracy": 0.
|
| 351 |
-
"num_tokens":
|
| 352 |
"step": 320
|
| 353 |
},
|
| 354 |
{
|
| 355 |
-
"entropy": 0.
|
| 356 |
"epoch": 2.7280248190279215,
|
| 357 |
-
"grad_norm": 0.
|
| 358 |
"learning_rate": 8.134637525034839e-06,
|
| 359 |
-
"loss":
|
| 360 |
-
"mean_token_accuracy": 0.
|
| 361 |
-
"num_tokens":
|
| 362 |
"step": 330
|
| 363 |
},
|
| 364 |
{
|
| 365 |
-
"entropy": 0.
|
| 366 |
"epoch": 2.8107549120992763,
|
| 367 |
-
"grad_norm": 0.
|
| 368 |
"learning_rate": 4.081353362167406e-06,
|
| 369 |
-
"loss":
|
| 370 |
-
"mean_token_accuracy": 0.
|
| 371 |
-
"num_tokens":
|
| 372 |
"step": 340
|
| 373 |
},
|
| 374 |
{
|
| 375 |
-
"entropy": 0.
|
| 376 |
"epoch": 2.8934850051706307,
|
| 377 |
-
"grad_norm": 0.
|
| 378 |
"learning_rate": 1.3950890573852126e-06,
|
| 379 |
-
"loss":
|
| 380 |
-
"mean_token_accuracy": 0.
|
| 381 |
-
"num_tokens":
|
| 382 |
"step": 350
|
| 383 |
},
|
| 384 |
{
|
| 385 |
-
"entropy": 0.
|
| 386 |
"epoch": 2.9762150982419855,
|
| 387 |
-
"grad_norm": 0.
|
| 388 |
"learning_rate": 1.1412889406192673e-07,
|
| 389 |
-
"loss":
|
| 390 |
-
"mean_token_accuracy": 0.
|
| 391 |
-
"num_tokens":
|
| 392 |
"step": 360
|
| 393 |
},
|
| 394 |
{
|
| 395 |
"epoch": 3.0,
|
| 396 |
-
"eval_entropy": 0.
|
| 397 |
-
"eval_loss": 0.
|
| 398 |
-
"eval_mean_token_accuracy": 0.
|
| 399 |
-
"eval_num_tokens":
|
| 400 |
-
"eval_runtime":
|
| 401 |
-
"eval_samples_per_second":
|
| 402 |
-
"eval_steps_per_second":
|
| 403 |
"step": 363
|
| 404 |
}
|
| 405 |
],
|
|
@@ -420,7 +420,7 @@
|
|
| 420 |
"attributes": {}
|
| 421 |
}
|
| 422 |
},
|
| 423 |
-
"total_flos":
|
| 424 |
"train_batch_size": 1,
|
| 425 |
"trial_name": null,
|
| 426 |
"trial_params": null
|
|
|
|
| 1 |
{
|
| 2 |
"best_global_step": 363,
|
| 3 |
+
"best_metric": 0.43587613105773926,
|
| 4 |
+
"best_model_checkpoint": "/workspace/gemma4-26b-securecode/checkpoint-363",
|
| 5 |
"epoch": 3.0,
|
| 6 |
"eval_steps": 500,
|
| 7 |
"global_step": 363,
|
|
|
|
| 10 |
"is_world_process_zero": true,
|
| 11 |
"log_history": [
|
| 12 |
{
|
| 13 |
+
"entropy": 1.0907821020111441,
|
| 14 |
"epoch": 0.0827300930713547,
|
| 15 |
+
"grad_norm": 20.875,
|
| 16 |
"learning_rate": 1.8e-05,
|
| 17 |
+
"loss": 80.26775512695312,
|
| 18 |
+
"mean_token_accuracy": 0.4542873948812485,
|
| 19 |
+
"num_tokens": 326185.0,
|
| 20 |
"step": 10
|
| 21 |
},
|
| 22 |
{
|
| 23 |
+
"entropy": 0.8271314173936843,
|
| 24 |
"epoch": 0.1654601861427094,
|
| 25 |
+
"grad_norm": 8.75,
|
| 26 |
"learning_rate": 3.8e-05,
|
| 27 |
+
"loss": 58.08096923828125,
|
| 28 |
+
"mean_token_accuracy": 0.5611657274886965,
|
| 29 |
+
"num_tokens": 653865.0,
|
| 30 |
"step": 20
|
| 31 |
},
|
| 32 |
{
|
| 33 |
+
"entropy": 0.4787554959766567,
|
| 34 |
"epoch": 0.2481902792140641,
|
| 35 |
+
"grad_norm": 1.7109375,
|
| 36 |
"learning_rate": 5.8e-05,
|
| 37 |
+
"loss": 25.493240356445312,
|
| 38 |
+
"mean_token_accuracy": 0.7378443486988544,
|
| 39 |
+
"num_tokens": 981337.0,
|
| 40 |
"step": 30
|
| 41 |
},
|
| 42 |
{
|
| 43 |
+
"entropy": 0.7855595085769892,
|
| 44 |
"epoch": 0.3309203722854188,
|
| 45 |
+
"grad_norm": 0.8671875,
|
| 46 |
"learning_rate": 7.800000000000001e-05,
|
| 47 |
+
"loss": 14.629072570800782,
|
| 48 |
+
"mean_token_accuracy": 0.7917733617126942,
|
| 49 |
+
"num_tokens": 1308584.0,
|
| 50 |
"step": 40
|
| 51 |
},
|
| 52 |
{
|
| 53 |
+
"entropy": 0.7569877350702882,
|
| 54 |
"epoch": 0.4136504653567735,
|
| 55 |
+
"grad_norm": 2.109375,
|
| 56 |
"learning_rate": 9.8e-05,
|
| 57 |
+
"loss": 12.609142303466797,
|
| 58 |
+
"mean_token_accuracy": 0.8013272784650326,
|
| 59 |
+
"num_tokens": 1635098.0,
|
| 60 |
"step": 50
|
| 61 |
},
|
| 62 |
{
|
| 63 |
+
"entropy": 0.6735223602503538,
|
| 64 |
"epoch": 0.4963805584281282,
|
| 65 |
+
"grad_norm": 16.875,
|
| 66 |
"learning_rate": 0.000118,
|
| 67 |
+
"loss": 10.704925537109375,
|
| 68 |
+
"mean_token_accuracy": 0.8209844313561916,
|
| 69 |
+
"num_tokens": 1962302.0,
|
| 70 |
"step": 60
|
| 71 |
},
|
| 72 |
{
|
| 73 |
+
"entropy": 0.6005677949637175,
|
| 74 |
"epoch": 0.5791106514994829,
|
| 75 |
+
"grad_norm": 1.546875,
|
| 76 |
"learning_rate": 0.000138,
|
| 77 |
+
"loss": 9.783185577392578,
|
| 78 |
+
"mean_token_accuracy": 0.8308866504579783,
|
| 79 |
+
"num_tokens": 2289982.0,
|
| 80 |
"step": 70
|
| 81 |
},
|
| 82 |
{
|
| 83 |
+
"entropy": 0.5877057909965515,
|
| 84 |
"epoch": 0.6618407445708376,
|
| 85 |
+
"grad_norm": 11.25,
|
| 86 |
"learning_rate": 0.00015800000000000002,
|
| 87 |
+
"loss": 9.298844909667968,
|
| 88 |
+
"mean_token_accuracy": 0.8359990835189819,
|
| 89 |
+
"num_tokens": 2616786.0,
|
| 90 |
"step": 80
|
| 91 |
},
|
| 92 |
{
|
| 93 |
+
"entropy": 0.5447238819673658,
|
| 94 |
"epoch": 0.7445708376421923,
|
| 95 |
+
"grad_norm": 1.2890625,
|
| 96 |
"learning_rate": 0.00017800000000000002,
|
| 97 |
+
"loss": 8.777264404296876,
|
| 98 |
+
"mean_token_accuracy": 0.8440194871276617,
|
| 99 |
+
"num_tokens": 2941975.0,
|
| 100 |
"step": 90
|
| 101 |
},
|
| 102 |
{
|
| 103 |
+
"entropy": 0.5323287105187774,
|
| 104 |
"epoch": 0.827300930713547,
|
| 105 |
+
"grad_norm": 0.70703125,
|
| 106 |
"learning_rate": 0.00019800000000000002,
|
| 107 |
+
"loss": 8.489185333251953,
|
| 108 |
+
"mean_token_accuracy": 0.8486687760800123,
|
| 109 |
+
"num_tokens": 3269655.0,
|
| 110 |
"step": 100
|
| 111 |
},
|
| 112 |
{
|
| 113 |
+
"entropy": 0.4949887519702315,
|
| 114 |
"epoch": 0.9100310237849017,
|
| 115 |
+
"grad_norm": 0.439453125,
|
| 116 |
"learning_rate": 0.00019942266891397815,
|
| 117 |
+
"loss": 8.192723083496094,
|
| 118 |
+
"mean_token_accuracy": 0.8528529018163681,
|
| 119 |
+
"num_tokens": 3595193.0,
|
| 120 |
"step": 110
|
| 121 |
},
|
| 122 |
{
|
| 123 |
+
"entropy": 0.4980895221233368,
|
| 124 |
"epoch": 0.9927611168562565,
|
| 125 |
+
"grad_norm": 0.921875,
|
| 126 |
"learning_rate": 0.00019743551343638324,
|
| 127 |
+
"loss": 7.908926391601563,
|
| 128 |
+
"mean_token_accuracy": 0.8567473825067282,
|
| 129 |
+
"num_tokens": 3922475.0,
|
| 130 |
"step": 120
|
| 131 |
},
|
| 132 |
{
|
| 133 |
"epoch": 1.0,
|
| 134 |
+
"eval_entropy": 0.5629051625728607,
|
| 135 |
+
"eval_loss": 0.4985087513923645,
|
| 136 |
+
"eval_mean_token_accuracy": 0.8571729124978531,
|
| 137 |
+
"eval_num_tokens": 3949618.0,
|
| 138 |
+
"eval_runtime": 122.3216,
|
| 139 |
+
"eval_samples_per_second": 1.758,
|
| 140 |
+
"eval_steps_per_second": 1.758,
|
| 141 |
"step": 121
|
| 142 |
},
|
| 143 |
{
|
| 144 |
+
"entropy": 0.5185692432937743,
|
| 145 |
"epoch": 1.0744570837642193,
|
| 146 |
+
"grad_norm": 0.37890625,
|
| 147 |
"learning_rate": 0.00019405971991583108,
|
| 148 |
+
"loss": 7.807546997070313,
|
| 149 |
+
"mean_token_accuracy": 0.8577670869947989,
|
| 150 |
+
"num_tokens": 4244530.0,
|
| 151 |
"step": 130
|
| 152 |
},
|
| 153 |
{
|
| 154 |
+
"entropy": 0.4555334035307169,
|
| 155 |
"epoch": 1.157187176835574,
|
| 156 |
+
"grad_norm": 0.1953125,
|
| 157 |
"learning_rate": 0.00018934339971482674,
|
| 158 |
+
"loss": 7.464869689941406,
|
| 159 |
+
"mean_token_accuracy": 0.8638800706714391,
|
| 160 |
+
"num_tokens": 4572210.0,
|
| 161 |
"step": 140
|
| 162 |
},
|
| 163 |
{
|
| 164 |
+
"entropy": 0.47754106651991607,
|
| 165 |
"epoch": 1.2399172699069285,
|
| 166 |
+
"grad_norm": 1.21875,
|
| 167 |
"learning_rate": 0.00018335376920472097,
|
| 168 |
+
"loss": 7.764054870605468,
|
| 169 |
+
"mean_token_accuracy": 0.8579531148076057,
|
| 170 |
+
"num_tokens": 4897694.0,
|
| 171 |
"step": 150
|
| 172 |
},
|
| 173 |
{
|
| 174 |
+
"entropy": 0.4550897226668894,
|
| 175 |
"epoch": 1.3226473629782833,
|
| 176 |
+
"grad_norm": 0.28515625,
|
| 177 |
"learning_rate": 0.00017617619180688085,
|
| 178 |
+
"loss": 7.322466278076172,
|
| 179 |
+
"mean_token_accuracy": 0.8654993120580912,
|
| 180 |
+
"num_tokens": 5223716.0,
|
| 181 |
"step": 160
|
| 182 |
},
|
| 183 |
{
|
| 184 |
+
"entropy": 0.4635292864404619,
|
| 185 |
"epoch": 1.4053774560496382,
|
| 186 |
+
"grad_norm": 0.8203125,
|
| 187 |
"learning_rate": 0.00016791296140450545,
|
| 188 |
+
"loss": 7.322091674804687,
|
| 189 |
+
"mean_token_accuracy": 0.8659177150577306,
|
| 190 |
+
"num_tokens": 5550963.0,
|
| 191 |
"step": 170
|
| 192 |
},
|
| 193 |
{
|
| 194 |
+
"entropy": 0.46733071468770504,
|
| 195 |
"epoch": 1.4881075491209927,
|
| 196 |
+
"grad_norm": 0.45703125,
|
| 197 |
"learning_rate": 0.0001586818444637402,
|
| 198 |
+
"loss": 7.3169700622558596,
|
| 199 |
+
"mean_token_accuracy": 0.8659562785178423,
|
| 200 |
+
"num_tokens": 5878643.0,
|
| 201 |
"step": 180
|
| 202 |
},
|
| 203 |
{
|
| 204 |
+
"entropy": 0.4562882795929909,
|
| 205 |
"epoch": 1.5708376421923473,
|
| 206 |
+
"grad_norm": 0.302734375,
|
| 207 |
"learning_rate": 0.0001486144016415862,
|
| 208 |
+
"loss": 7.291434478759766,
|
| 209 |
+
"mean_token_accuracy": 0.8656417932361364,
|
| 210 |
+
"num_tokens": 6206323.0,
|
| 211 |
"step": 190
|
| 212 |
},
|
| 213 |
{
|
| 214 |
+
"entropy": 0.4341404400765896,
|
| 215 |
"epoch": 1.6535677352637022,
|
| 216 |
+
"grad_norm": 0.30859375,
|
| 217 |
"learning_rate": 0.00013785411280082746,
|
| 218 |
+
"loss": 6.904853057861328,
|
| 219 |
+
"mean_token_accuracy": 0.8713251128792763,
|
| 220 |
+
"num_tokens": 6533527.0,
|
| 221 |
"step": 200
|
| 222 |
},
|
| 223 |
{
|
| 224 |
+
"entropy": 0.4494163889437914,
|
| 225 |
"epoch": 1.736297828335057,
|
| 226 |
+
"grad_norm": 0.2177734375,
|
| 227 |
"learning_rate": 0.00012655433215401438,
|
| 228 |
+
"loss": 7.195234680175782,
|
| 229 |
+
"mean_token_accuracy": 0.8673424527049065,
|
| 230 |
+
"num_tokens": 6861207.0,
|
| 231 |
"step": 210
|
| 232 |
},
|
| 233 |
{
|
| 234 |
+
"entropy": 0.46514057284221055,
|
| 235 |
"epoch": 1.8190279214064116,
|
| 236 |
+
"grad_norm": 0.220703125,
|
| 237 |
"learning_rate": 0.00011487610267952142,
|
| 238 |
+
"loss": 7.431344604492187,
|
| 239 |
+
"mean_token_accuracy": 0.8633232209831476,
|
| 240 |
+
"num_tokens": 7188281.0,
|
| 241 |
"step": 220
|
| 242 |
},
|
| 243 |
{
|
| 244 |
+
"entropy": 0.43800092255696654,
|
| 245 |
"epoch": 1.9017580144777662,
|
| 246 |
+
"grad_norm": 0.1962890625,
|
| 247 |
"learning_rate": 0.00010298586095833151,
|
| 248 |
+
"loss": 7.023017883300781,
|
| 249 |
+
"mean_token_accuracy": 0.8693898901343345,
|
| 250 |
+
"num_tokens": 7513788.0,
|
| 251 |
"step": 230
|
| 252 |
},
|
| 253 |
{
|
| 254 |
+
"entropy": 0.44280060222372414,
|
| 255 |
"epoch": 1.984488107549121,
|
| 256 |
+
"grad_norm": 0.453125,
|
| 257 |
"learning_rate": 9.10530651419099e-05,
|
| 258 |
+
"loss": 7.070135498046875,
|
| 259 |
+
"mean_token_accuracy": 0.8684426795691251,
|
| 260 |
+
"num_tokens": 7839097.0,
|
| 261 |
"step": 240
|
| 262 |
},
|
| 263 |
{
|
| 264 |
"epoch": 2.0,
|
| 265 |
+
"eval_entropy": 0.43673129948072653,
|
| 266 |
+
"eval_loss": 0.44326454401016235,
|
| 267 |
+
"eval_mean_token_accuracy": 0.8689799866010977,
|
| 268 |
+
"eval_num_tokens": 7899236.0,
|
| 269 |
+
"eval_runtime": 122.7611,
|
| 270 |
+
"eval_samples_per_second": 1.751,
|
| 271 |
+
"eval_steps_per_second": 1.751,
|
| 272 |
"step": 242
|
| 273 |
},
|
| 274 |
{
|
| 275 |
+
"entropy": 0.39076420287542707,
|
| 276 |
"epoch": 2.066184074457084,
|
| 277 |
+
"grad_norm": 0.2421875,
|
| 278 |
"learning_rate": 7.924777985705556e-05,
|
| 279 |
+
"loss": 6.180745697021484,
|
| 280 |
+
"mean_token_accuracy": 0.8815464007703564,
|
| 281 |
+
"num_tokens": 8159427.0,
|
| 282 |
"step": 250
|
| 283 |
},
|
| 284 |
{
|
| 285 |
+
"entropy": 0.4168915188405663,
|
| 286 |
"epoch": 2.1489141675284387,
|
| 287 |
+
"grad_norm": 0.1962890625,
|
| 288 |
"learning_rate": 6.773825246734622e-05,
|
| 289 |
+
"loss": 6.647556304931641,
|
| 290 |
+
"mean_token_accuracy": 0.8745267510414123,
|
| 291 |
+
"num_tokens": 8487107.0,
|
| 292 |
"step": 260
|
| 293 |
},
|
| 294 |
{
|
| 295 |
+
"entropy": 0.42043258836492897,
|
| 296 |
"epoch": 2.231644260599793,
|
| 297 |
+
"grad_norm": 0.2041015625,
|
| 298 |
"learning_rate": 5.668851523397829e-05,
|
| 299 |
+
"loss": 6.715694427490234,
|
| 300 |
+
"mean_token_accuracy": 0.8731884736567735,
|
| 301 |
+
"num_tokens": 8814579.0,
|
| 302 |
"step": 270
|
| 303 |
},
|
| 304 |
{
|
| 305 |
+
"entropy": 0.4163642665371299,
|
| 306 |
"epoch": 2.314374353671148,
|
| 307 |
+
"grad_norm": 0.1884765625,
|
| 308 |
"learning_rate": 4.625604754968839e-05,
|
| 309 |
+
"loss": 6.608394622802734,
|
| 310 |
+
"mean_token_accuracy": 0.8757493741810322,
|
| 311 |
+
"num_tokens": 9140560.0,
|
| 312 |
"step": 280
|
| 313 |
},
|
| 314 |
{
|
| 315 |
+
"entropy": 0.44139298899099233,
|
| 316 |
"epoch": 2.3971044467425027,
|
| 317 |
+
"grad_norm": 0.3515625,
|
| 318 |
"learning_rate": 3.658953156328857e-05,
|
| 319 |
+
"loss": 7.034827423095703,
|
| 320 |
+
"mean_token_accuracy": 0.8671251021325588,
|
| 321 |
+
"num_tokens": 9466711.0,
|
| 322 |
"step": 290
|
| 323 |
},
|
| 324 |
{
|
| 325 |
+
"entropy": 0.40197266899049283,
|
| 326 |
"epoch": 2.479834539813857,
|
| 327 |
+
"grad_norm": 1.7109375,
|
| 328 |
"learning_rate": 2.7826733181357932e-05,
|
| 329 |
+
"loss": 6.454815673828125,
|
| 330 |
+
"mean_token_accuracy": 0.8788287751376629,
|
| 331 |
+
"num_tokens": 9794391.0,
|
| 332 |
"step": 300
|
| 333 |
},
|
| 334 |
{
|
| 335 |
+
"entropy": 0.4310479393228889,
|
| 336 |
"epoch": 2.562564632885212,
|
| 337 |
+
"grad_norm": 0.388671875,
|
| 338 |
"learning_rate": 2.0092538646774072e-05,
|
| 339 |
+
"loss": 6.927095031738281,
|
| 340 |
+
"mean_token_accuracy": 0.869931609556079,
|
| 341 |
+
"num_tokens": 10122071.0,
|
| 342 |
"step": 310
|
| 343 |
},
|
| 344 |
{
|
| 345 |
+
"entropy": 0.4207622425630689,
|
| 346 |
"epoch": 2.6452947259565667,
|
| 347 |
+
"grad_norm": 0.30078125,
|
| 348 |
"learning_rate": 1.3497174676506674e-05,
|
| 349 |
+
"loss": 6.691123962402344,
|
| 350 |
+
"mean_token_accuracy": 0.8738963160663843,
|
| 351 |
+
"num_tokens": 10448117.0,
|
| 352 |
"step": 320
|
| 353 |
},
|
| 354 |
{
|
| 355 |
+
"entropy": 0.4212987683247775,
|
| 356 |
"epoch": 2.7280248190279215,
|
| 357 |
+
"grad_norm": 0.275390625,
|
| 358 |
"learning_rate": 8.134637525034839e-06,
|
| 359 |
+
"loss": 6.729954528808594,
|
| 360 |
+
"mean_token_accuracy": 0.8721410397440195,
|
| 361 |
+
"num_tokens": 10774291.0,
|
| 362 |
"step": 330
|
| 363 |
},
|
| 364 |
{
|
| 365 |
+
"entropy": 0.41487495419569315,
|
| 366 |
"epoch": 2.8107549120992763,
|
| 367 |
+
"grad_norm": 0.271484375,
|
| 368 |
"learning_rate": 4.081353362167406e-06,
|
| 369 |
+
"loss": 6.5751182556152346,
|
| 370 |
+
"mean_token_accuracy": 0.8759495642036199,
|
| 371 |
+
"num_tokens": 11101971.0,
|
| 372 |
"step": 340
|
| 373 |
},
|
| 374 |
{
|
| 375 |
+
"entropy": 0.4179635870270431,
|
| 376 |
"epoch": 2.8934850051706307,
|
| 377 |
+
"grad_norm": 0.63671875,
|
| 378 |
"learning_rate": 1.3950890573852126e-06,
|
| 379 |
+
"loss": 6.681074523925782,
|
| 380 |
+
"mean_token_accuracy": 0.8738733492791653,
|
| 381 |
+
"num_tokens": 11429651.0,
|
| 382 |
"step": 350
|
| 383 |
},
|
| 384 |
{
|
| 385 |
+
"entropy": 0.40078685265034436,
|
| 386 |
"epoch": 2.9762150982419855,
|
| 387 |
+
"grad_norm": 0.267578125,
|
| 388 |
"learning_rate": 1.1412889406192673e-07,
|
| 389 |
+
"loss": 6.333649444580078,
|
| 390 |
+
"mean_token_accuracy": 0.8798086743801832,
|
| 391 |
+
"num_tokens": 11754646.0,
|
| 392 |
"step": 360
|
| 393 |
},
|
| 394 |
{
|
| 395 |
"epoch": 3.0,
|
| 396 |
+
"eval_entropy": 0.41775747471770575,
|
| 397 |
+
"eval_loss": 0.43587613105773926,
|
| 398 |
+
"eval_mean_token_accuracy": 0.8707626212474912,
|
| 399 |
+
"eval_num_tokens": 11848854.0,
|
| 400 |
+
"eval_runtime": 122.5213,
|
| 401 |
+
"eval_samples_per_second": 1.755,
|
| 402 |
+
"eval_steps_per_second": 1.755,
|
| 403 |
"step": 363
|
| 404 |
}
|
| 405 |
],
|
|
|
|
| 420 |
"attributes": {}
|
| 421 |
}
|
| 422 |
},
|
| 423 |
+
"total_flos": 1.7834649983749484e+18,
|
| 424 |
"train_batch_size": 1,
|
| 425 |
"trial_name": null,
|
| 426 |
"trial_params": null
|
checkpoint-363/training_args.bin
CHANGED
|
@@ -1,3 +1,3 @@
|
|
| 1 |
version https://git-lfs.github.com/spec/v1
|
| 2 |
-
oid sha256:
|
| 3 |
size 5713
|
|
|
|
| 1 |
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:5bdf412780aad6b6bc055248dd1640e0d1a2282e1c11f28390eac7fae5fae303
|
| 3 |
size 5713
|
training_args.bin
CHANGED
|
@@ -1,3 +1,3 @@
|
|
| 1 |
version https://git-lfs.github.com/spec/v1
|
| 2 |
-
oid sha256:
|
| 3 |
size 5713
|
|
|
|
| 1 |
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:5bdf412780aad6b6bc055248dd1640e0d1a2282e1c11f28390eac7fae5fae303
|
| 3 |
size 5713
|