---
library_name: peft
license: mit
base_model: microsoft/phi-1_5
tags:
  - generated_from_trainer
model-index:
  - name: phi_1.5_mfr_instruct
    results: []
---

# phi_1.5_mfr_instruct

This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.4011
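
Since the metadata lists `library_name: peft`, this repository presumably hosts a PEFT adapter for `microsoft/phi-1_5`, which can be loaded roughly as sketched below. The adapter path is a placeholder and the prompt format is an assumption; neither is specified in this card.

```python
# Minimal loading sketch, assuming this repo hosts a PEFT adapter
# for microsoft/phi-1_5. Replace the placeholder path with the actual
# repository id or a local adapter directory.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

ADAPTER_PATH = "your-username/phi_1.5_mfr_instruct"  # hypothetical repo id

# AutoPeftModelForCausalLM reads the base model recorded in the adapter
# config (microsoft/phi-1_5) and applies the adapter weights on top.
model = AutoPeftModelForCausalLM.from_pretrained(ADAPTER_PATH, torch_dtype=torch.float32)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

# Assumed phi-1_5-style instruction prompt; adapt to the actual training format.
prompt = "Instruct: Explain gradient accumulation in one sentence.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```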

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 2000
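
For reference, here is a minimal sketch of how these values map onto `transformers.TrainingArguments`. Only the listed values come from this card; the output directory and the evaluation cadence (every 250 steps, inferred from the results table below) are assumptions.

```python
# Sketch of a TrainingArguments configuration matching the hyperparameters above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="phi_1.5_mfr_instruct",  # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,  # effective train batch size: 2 * 4 = 8
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    max_steps=2000,
    eval_strategy="steps",  # assumed: the results table logs eval every 250 steps
    eval_steps=250,
)
```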

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.31          | 50.0  | 250  | 1.2487          |
| 0.5629        | 100.0 | 500  | 0.4522          |
| 0.2176        | 150.0 | 750  | 0.4012          |
| 0.1899        | 200.0 | 1000 | 0.3989          |
| 0.1832        | 250.0 | 1250 | 0.3996          |
| 0.1801        | 300.0 | 1500 | 0.4006          |
| 0.1788        | 350.0 | 1750 | 0.4011          |
| 0.1784        | 400.0 | 2000 | 0.4011          |

### Framework versions

- PEFT 0.14.0
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0