<details><summary>See axolotl config</summary>

axolotl version: `0.16.0.dev0`

```yaml
base_model: Qwen/Qwen3-8B
load_in_8bit: false
load_in_4bit: false
strict: false

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true

chat_template: qwen3
chat_template_kwargs:
  enable_thinking: false

datasets:
  - path: xiaolesu/OsmosisProofling-SFT-NT
    type: alpaca
    split: train
test_datasets:
  - path: xiaolesu/OsmosisProofling-SFT-NT
    type: alpaca
    split: validation
output_dir: ./outputs/OsmosisProofling-SFT-NT/

sequence_len: 4096
sample_packing: true
flex_attention: true
flex_attn_compile_kwargs:
  dynamic: false
  mode: max-autotune-no-cudagraphs

wandb_project: OsmosisProofling-SFT-NT
wandb_entity:
wandb_watch:
wandb_name: qwen3-8b-sft-nt
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 2
num_epochs: 2
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 1e-5

bf16: true
tf32: true

resume_from_checkpoint:
logging_steps: 5
evals_per_epoch: 10
saves_per_epoch: 10
save_total_limit: 3

warmup_ratio: 0.1
weight_decay: 0.0

fsdp:
  - full_shard
  - auto_wrap
fsdp_config:
  fsdp_version: 2
  fsdp_offload_params: false
  fsdp_cpu_ram_efficient_loading: true
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_transformer_layer_cls_to_wrap: Qwen3DecoderLayer
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_reshard_after_forward: true
  fsdp_activation_checkpointing: true

special_tokens:
```

</details>
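The card does not state the launch command, but a config like this is normally passed to the axolotl CLI, e.g. `axolotl train config.yaml` or `accelerate launch -m axolotl.cli.train config.yaml` on a multi-GPU host.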
# outputs/OsmosisProofling-SFT-NT/

This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the xiaolesu/OsmosisProofling-SFT-NT dataset. It achieves the following results on the evaluation set:
- Loss: 0.3568
- Ppl (perplexity): 1.4287
- Max active memory (GiB): 20.12
- Max allocated memory (GiB): 20.12
- Device reserved memory (GiB): 34.65
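The reported perplexity is the exponential of the evaluation loss (exp(0.3568) ≈ 1.4287), so the two headline metrics are two views of the same quantity; the same relation holds for every row of the results table below.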
## Model description
More information needed
## Intended uses & limitations
More information needed
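In lieu of an official example, here is a minimal inference sketch. The repo id, prompt, and generation settings are illustrative assumptions, not from the card; `enable_thinking=False` mirrors the `chat_template_kwargs` used during fine-tuning.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id for the published checkpoint; substitute the real one.
MODEL_ID = "xiaolesu/OsmosisProofling-SFT-NT"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    dtype=torch.bfloat16,  # training ran in bf16; older transformers use torch_dtype=
    device_map="auto",
)

messages = [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]
# enable_thinking=False matches chat_template_kwargs in the training config,
# so the model sees prompts formatted the same way as during SFT.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=False,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```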
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (the derived values are sanity-checked in the sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: adamw_torch_fused (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 33
- training_steps: 332
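The batch-size and warmup values above follow from the config rather than being set directly. A small sanity-check sketch (a reconstruction using only the numbers reported in this card):

```python
# Effective batch size: per-device micro batch x grad accumulation x device count.
micro_batch_size = 2
gradient_accumulation_steps = 1
num_devices = 8
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 16  # matches the reported value

# Warmup steps: warmup_ratio from the config (0.1) applied to the total optimizer steps.
warmup_ratio = 0.1
training_steps = 332
warmup_steps = int(warmup_ratio * training_steps)
assert warmup_steps == 33  # matches lr_scheduler_warmup_steps
```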
### Training results
| Training Loss | Epoch | Step | Validation Loss | Ppl | Max Active (GiB) | Max Allocated (GiB) | Reserved (GiB) |
|---|---|---|---|---|---|---|---|
| No log | 0 | 0 | 1.5619 | 4.7678 | 16.27 | 16.27 | 19.92 |
| 1.2850 | 0.1049 | 17 | 1.0296 | 2.7999 | 20.12 | 20.12 | 33.31 |
| 0.6911 | 0.2099 | 34 | 0.5271 | 1.6940 | 20.12 | 20.12 | 34.65 |
| 0.4396 | 0.3148 | 51 | 0.4369 | 1.5479 | 20.12 | 20.12 | 34.65 |
| 0.4075 | 0.4198 | 68 | 0.4020 | 1.4949 | 20.12 | 20.12 | 34.65 |
| 0.3810 | 0.5247 | 85 | 0.3842 | 1.4685 | 20.12 | 20.12 | 34.65 |
| 0.3712 | 0.6296 | 102 | 0.3751 | 1.4551 | 20.12 | 20.12 | 34.65 |
| 0.3635 | 0.7346 | 119 | 0.3689 | 1.4462 | 20.12 | 20.12 | 34.65 |
| 0.3612 | 0.8395 | 136 | 0.3649 | 1.4403 | 20.12 | 20.12 | 34.65 |
| 0.3710 | 0.9444 | 153 | 0.3626 | 1.4371 | 20.12 | 20.12 | 34.65 |
| 0.3631 | 1.0494 | 170 | 0.3600 | 1.4333 | 20.12 | 20.12 | 34.65 |
| 0.3410 | 1.1543 | 187 | 0.3585 | 1.4311 | 20.12 | 20.12 | 34.65 |
| 0.3333 | 1.2593 | 204 | 0.3576 | 1.4298 | 20.12 | 20.12 | 34.65 |
| 0.3381 | 1.3642 | 221 | 0.3576 | 1.4298 | 20.12 | 20.12 | 34.65 |
| 0.3216 | 1.4691 | 238 | 0.3571 | 1.4292 | 20.12 | 20.12 | 34.65 |
| 0.3253 | 1.5741 | 255 | 0.3569 | 1.4289 | 20.12 | 20.12 | 34.65 |
| 0.3325 | 1.6790 | 272 | 0.3568 | 1.4287 | 20.12 | 20.12 | 34.65 |
| 0.3287 | 1.7840 | 289 | 0.3568 | 1.4288 | 20.12 | 20.12 | 34.65 |
| 0.3301 | 1.8889 | 306 | 0.3569 | 1.4289 | 20.12 | 20.12 | 34.65 |
| 0.3290 | 1.9938 | 323 | 0.3568 | 1.4287 | 20.12 | 20.12 | 34.65 |
### Framework versions
- Transformers 5.3.0
- PyTorch 2.9.1+cu128
- Datasets 4.5.0
- Tokenizers 0.22.2