---
license: apache-2.0
library_name: peft
tags:
  - generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
  - name: out
    results: []
---

# out

This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 4.3731

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 20
- mixed_precision_training: Native AMP
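The hyperparameters above, combined with the step counts in the training-results table, fix the size of the training set. A quick sanity check (assuming no gradient accumulation and a single device, neither of which is stated in this card):

```python
# Sanity-check the step counts implied by the hyperparameters.
# Assumption: no gradient accumulation, single device.
train_batch_size = 1
steps_per_epoch = 326   # step count after epoch 1 in the training-results table
num_epochs = 20

# With batch size 1, one optimizer step consumes one example.
dataset_size = steps_per_epoch * train_batch_size
total_steps = steps_per_epoch * num_epochs

print(dataset_size, total_steps)  # 326 6520
```

The 6520 total steps match the final row of the training-results table, so the implied training set is roughly 326 examples.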

### Training results

Validation loss is lowest after epoch 1 (2.7831) and trends upward thereafter while training loss keeps falling, which suggests overfitting; the 4.3731 reported above is the epoch-20 value.

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8286        | 1.0   | 326  | 2.7831          |
| 1.7135        | 2.0   | 652  | 2.9345          |
| 0.5324        | 3.0   | 978  | 3.2704          |
| 0.6332        | 4.0   | 1304 | 3.5215          |
| 0.4136        | 5.0   | 1630 | 3.6194          |
| 0.6789        | 6.0   | 1956 | 4.0601          |
| 0.3095        | 7.0   | 2282 | 3.9619          |
| 0.2139        | 8.0   | 2608 | 4.2931          |
| 0.2027        | 9.0   | 2934 | 4.3885          |
| 0.1184        | 10.0  | 3260 | 4.2185          |
| 0.1612        | 11.0  | 3586 | 4.2801          |
| 0.2609        | 12.0  | 3912 | 4.4705          |
| 0.1564        | 13.0  | 4238 | 4.7184          |
| 0.2344        | 14.0  | 4564 | 4.3517          |
| 0.4565        | 15.0  | 4890 | 4.7181          |
| 0.1623        | 16.0  | 5216 | 4.7855          |
| 0.2934        | 17.0  | 5542 | 5.5058          |
| 0.1151        | 18.0  | 5868 | 4.6761          |
| 0.178         | 19.0  | 6194 | 5.0001          |
| 0.1595        | 20.0  | 6520 | 4.3731          |

### Framework versions

- PEFT 0.8.2
- Transformers 4.38.1
- PyTorch 2.0.1
- Datasets 2.17.1
- Tokenizers 0.15.2
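Since this is a PEFT adapter rather than a full model, it must be loaded on top of the base weights. A minimal inference sketch using the versions above (the adapter repo id `anupk/askPauladapter` is assumed from this card's publication path; access to the base model weights and sufficient GPU memory are also assumed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "anupk/askPauladapter"  # assumed adapter repo id

# Load the base model in half precision, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

For faster inference the adapter can be folded into the base weights with `model.merge_and_unload()`, at the cost of no longer being able to detach it.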