---
library_name: transformers
base_model: timarni/qwen3_dpo
tags:
- generated_from_trainer
datasets:
- timarni/MNLP_intstruction_tuning
model-index:
- name: outputs/dpo_full_alpaca
  results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.9.2`

```yaml
base_model: timarni/qwen3_dpo

# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name

plugins:
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin

strict: false

chat_template: qwen3
datasets:
  - path: timarni/MNLP_intstruction_tuning
    type: alpaca
    split: train
shuffle_merged_datasets: true
val_set_size: 0.1
output_dir: ./outputs/dpo_full_alpaca

dataset_prepared_path: last_run_prepared

sequence_len: 4096 # 2048
sample_packing: true # check whether it actually learns on the samples (better understand the hyperparameter and eventually install axolotl to debug)
eval_sample_packing: true
pad_to_sequence_len: true
# train_on_inputs: true # NEW
# group_by_length: false # NEW?

# Ensure no LoRA is applied
adapter: null
lora: false
merge_lora: false

wandb_project: mnlp_project
wandb_entity: tim-arni
wandb_watch:
wandb_name: dpo_full_alpaca_resume_from_ckpt
wandb_log_model:

gradient_accumulation_steps: 16 # 2
micro_batch_size: 2 # 1
num_epochs: 3
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00005
# cosine_min_lr_ratio: 0.1
warmup_ratio: 0.05
weight_decay: 0.01

bf16: auto
tf32: true

gradient_checkpointing: offload
gradient_checkpointing_kwargs:
  use_reentrant: false
resume_from_checkpoint: /mloscratch/users/arni/Workspace/mnlp_sft/outputs/dpo_full_alpaca/checkpoint-186
logging_steps: 1
gradient_clipping: 1.0 # or max_grad_norm?
flash_attention: true

evals_per_epoch: 2
saves_per_epoch: 1
save_total_limit: 20
special_tokens:
```

</details><br>
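With Axolotl 0.9.2 installed, a run with this config is typically launched via the Axolotl CLI, e.g. `accelerate launch -m axolotl.cli.train config.yaml` (assuming the YAML above is saved as `config.yaml`; the exact launch invocation depends on your environment). Note that the `resume_from_checkpoint` path is machine-specific and would need adjusting.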

# outputs/dpo_full_alpaca

This model is a fine-tuned version of [timarni/qwen3_dpo](https://huggingface.co/timarni/qwen3_dpo) on the [timarni/MNLP_intstruction_tuning](https://huggingface.co/datasets/timarni/MNLP_intstruction_tuning) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1520

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 13
- num_epochs: 3.0

The total train batch size of 128 is the micro batch size (2) × gradient accumulation steps (16) × number of devices (4).

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7154        | 0.0107 | 1    | 1.1239          |
| 0.1282        | 0.2567 | 24   | 0.2029          |
| 0.1105        | 0.5134 | 48   | 0.1860          |
| 0.1056        | 0.7701 | 72   | 0.1779          |
| 0.1004        | 1.0214 | 96   | 0.1736          |
| 0.0912        | 1.2781 | 120  | 0.1643          |
| 0.0861        | 1.5348 | 144  | 0.1576          |
| 0.0791        | 1.7914 | 168  | 0.1530          |
| 0.0751        | 2.0642 | 192  | 0.1510          |
| 0.0625        | 2.3209 | 216  | 0.1509          |
| 0.0453        | 2.5775 | 240  | 0.1513          |
| 0.0426        | 2.8342 | 264  | 0.1520          |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.5.1+cu121
- Datasets 3.5.1
- Tokenizers 0.21.1
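Since the card does not include a usage snippet, here is a minimal inference sketch (an assumption, not part of the original training pipeline): it loads the checkpoint with `transformers` and formats the prompt through the qwen3 chat template configured above. The repo id below is the *base* model and is a placeholder; substitute the id this fine-tune is published under.

```python
# Minimal inference sketch. Assumes the fine-tuned weights are on the Hub;
# `timarni/qwen3_dpo` is the base model, so swap in the fine-tune's repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "timarni/qwen3_dpo"  # placeholder: replace with this model's repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # picks bf16 on supported GPUs, matching `bf16: auto`
    device_map="auto",   # requires the `accelerate` package
)

# The config sets `chat_template: qwen3`, so format prompts via the template.
messages = [{"role": "user", "content": "Give three tips for writing unit tests."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```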