---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: Mixtral_SIG_v2
  results: []
---

# Mixtral_SIG_v2

This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6608

## Model description

Mixtral_SIG_v2 is a PEFT adapter for [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), trained with supervised fine-tuning (SFT) via TRL. This repository stores only the adapter weights; the base model is loaded alongside them at inference time.

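The card does not include usage instructions, so here is a minimal loading sketch using `peft` and `transformers`. The repo id `Mixtral_SIG_v2` below is an assumption; substitute the adapter's actual Hub repo id or a local path.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Mixtral_SIG_v2"  # hypothetical: replace with the adapter's real Hub id or local path

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the PEFT adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)

# Mistral-Instruct expects the [INST] ... [/INST] chat format; the tokenizer's
# chat template applies it for us.
messages = [{"role": "user", "content": "Hello! What can you do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
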
## Intended uses & limitations

More information needed

## Training and evaluation data

The metadata lists the training dataset as `generator`. This is the generic name the `datasets` library assigns to data built with `Dataset.from_generator` (commonly the result of TRL's example packing), not a public dataset identifier; the underlying training and evaluation data are not otherwise documented.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03 (a fractional value here suggests a warmup *ratio* was logged in the step-count field)
- training_steps: 300

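For readers who want to reproduce a comparable run, here is a hedged sketch of how these hyperparameters map onto TRL's `SFTTrainer`. The LoRA settings, sequence length, and datasets are placeholders (the card does not record them), and the exact TRL version used is not listed in the framework versions below.

```python
from datasets import Dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

# Placeholder data: the card does not document the actual training corpus.
train_ds = Dataset.from_dict({"text": ["<s>[INST] example prompt [/INST] example answer</s>"]})
eval_ds = Dataset.from_dict({"text": ["<s>[INST] example prompt [/INST] example answer</s>"]})

peft_config = LoraConfig(task_type="CAUSAL_LM")  # assumption: default LoRA settings

args = TrainingArguments(
    output_dir="Mixtral_SIG_v2",
    learning_rate=2.5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,         # the card logs warmup_steps=0.03, which reads like a ratio
    max_steps=300,
    evaluation_strategy="steps",
    eval_steps=10,             # matches the 10-step evaluation cadence in the table below
    logging_steps=10,
)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    peft_config=peft_config,
    dataset_text_field="text",
    packing=True,              # packing yields the "generator" dataset named in the metadata
    max_seq_length=2048,       # assumption: not reported in the card
)
trainer.train()
```
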
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.618 | 0.34 | 10 | 2.1689 |
| 1.5656 | 0.69 | 20 | 1.6442 |
| 1.2846 | 1.03 | 30 | 1.4577 |
| 1.144 | 1.38 | 40 | 1.3486 |
| 1.0455 | 1.72 | 50 | 1.2648 |
| 1.0073 | 2.07 | 60 | 1.1995 |
| 0.951 | 2.41 | 70 | 1.1439 |
| 0.9218 | 2.76 | 80 | 1.0959 |
| 0.8527 | 3.1 | 90 | 1.0501 |
| 0.8351 | 3.45 | 100 | 0.9564 |
| 0.7632 | 3.79 | 110 | 0.9155 |
| 0.697 | 4.14 | 120 | 0.8449 |
| 0.6341 | 4.48 | 130 | 0.7953 |
| 0.6212 | 4.83 | 140 | 0.7716 |
| 0.5881 | 5.17 | 150 | 0.7591 |
| 0.5804 | 5.52 | 160 | 0.7433 |
| 0.5694 | 5.86 | 170 | 0.7309 |
| 0.5537 | 6.21 | 180 | 0.7210 |
| 0.5466 | 6.55 | 190 | 0.7129 |
| 0.5272 | 6.9 | 200 | 0.7023 |
| 0.5106 | 7.24 | 210 | 0.6936 |
| 0.5225 | 7.59 | 220 | 0.6842 |
| 0.5062 | 7.93 | 230 | 0.6783 |
| 0.5003 | 8.28 | 240 | 0.6759 |
| 0.4931 | 8.62 | 250 | 0.6712 |
| 0.4828 | 8.97 | 260 | 0.6664 |
| 0.4642 | 9.31 | 270 | 0.6641 |
| 0.5037 | 9.66 | 280 | 0.6616 |
| 0.4674 | 10.0 | 290 | 0.6607 |
| 0.462 | 10.34 | 300 | 0.6608 |

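For context, assuming the reported loss is the usual mean token-level cross-entropy of causal language modeling, the final validation loss corresponds to a perplexity of roughly 1.94:

```python
import math

# Perplexity = exp(mean cross-entropy loss); assumes the standard causal-LM objective.
final_eval_loss = 0.6608
print(math.exp(final_eval_loss))  # ≈ 1.94
```
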
### Framework versions

- PEFT 0.10.0
- Transformers 4.37.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.1

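A matching environment can be pinned from the versions above; note that the "Pytorch" entry corresponds to the `torch` package on PyPI, and that TRL itself is needed for training but its version is not recorded in this card. A sketch of a `requirements.txt`:

```text
peft==0.10.0
transformers==4.37.2
torch==2.2.1
datasets==2.18.0
tokenizers==0.15.1
```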