See axolotl config

axolotl version: `0.13.0.dev0`

```yaml
base_model: anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
chat_template: tokenizer_default
trust_remote_code: true
special_tokens:
  pad_token: "<pad>"
  eos_token: "</s>"
datasets:
  - path: CrucibleLab/Loki_V2_Cleaned
    ds_type: json
    type: chat_template
    chat_template_strategy: tokenizer_default
    field_messages: conversations
    message_property_mappings:
      role: from
      content: value
    roles:
      user: ["user"]
      assistant: ["assistant"]
      system: ["system"]
    roles_to_train: ["assistant"]
dataset_prepared_path: "last_run_prepared"
output_dir: /workspace/data/24b-Qlora
train_on_inputs: false
shuffle_merged_datasets: true
adapter: qlora
load_in_4bit: true
lora_r: 256
lora_alpha: 256
lora_target_linear: true
peft_use_rslora: true
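# peft_use_rslora switches the adapter scaling to lora_alpha / sqrt(lora_r)
# (here 256 / sqrt(256) = 16) instead of standard LoRA's lora_alpha / lora_r (= 1).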
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
micro_batch_size: 8
gradient_accumulation_steps: 1
num_epochs: 1
lr_scheduler: rex
learning_rate: 2e-5
max_grad_norm: 4.5
save_steps: 1000 # Every 1000 steps
save_total_limit: 10 # Keep last 10 checkpoints
warmup_ratio: 0.05
hub_model_id: CrucibleLab/M3.2-24B-loki-V2
hub_strategy: all_checkpoints
eval_strategy: "no"
bf16: auto
tf32: true
gradient_checkpointing: true
use_reentrant: false
logging_steps: 1
flash_attention: true
optimizer: adamw_8bit
weight_decay: 0.0
save_safetensors: true
wandb_project: M3.2-24B-loki-V2
wandb_entity: CrucibleLabs
wandb_name: M3.2-24B-loki-V2
plugins:
  - axolotl.integrations.liger.LigerPlugin
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
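# The Cut Cross Entropy plugin supplies the loss kernel, which is why Liger's
# cross-entropy variants below are left disabled.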
cut_cross_entropy: true
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_cross_entropy: false
liger_fused_linear_cross_entropy: false
```
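The `datasets` block above maps ShareGPT-style keys onto axolotl's chat fields: each record carries a `conversations` list whose `from`/`value` entries become role/content. A minimal sketch of the expected record shape, with field names taken from the config and purely illustrative message text:

```python
# One illustrative record in the shape the datasets block expects.
# "conversations", "from", and "value" match field_messages and
# message_property_mappings above; the actual text is made up.
record = {
    "conversations": [
        {"from": "system", "value": "You are a helpful assistant."},
        {"from": "user", "value": "Hello there."},
        {"from": "assistant", "value": "Hi! How can I help?"},  # loss is computed here
    ]
}
```

With `roles_to_train: ["assistant"]` and `train_on_inputs: false`, only the assistant turns contribute to the loss.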
# M3.2-24B-loki-V2
This model is a fine-tuned version of anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only on the CrucibleLab/Loki_V2_Cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
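Pending a fuller write-up, a minimal inference sketch follows. The repo ids come from `base_model` and `hub_model_id` in the config; loading the base in 4-bit mirrors `load_in_4bit: true`, but the quantization and generation settings here are assumptions, not the authors' recommended setup.

```python
# Minimal sketch: load the base model in 4-bit and attach the QLoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only"
adapter_id = "CrucibleLab/M3.2-24B-loki-V2"  # hub_model_id from the config

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```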
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: 8-bit AdamW (`adamw_8bit`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 389
- training_steps: 7797
## Training results
## Framework versions
- PEFT 0.18.0
- Transformers 4.57.1
- PyTorch 2.8.0+cu128
- Datasets 4.4.1
- Tokenizers 0.22.1