---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.1
tags:
- trl
- sft
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: original_glue_boolq
  results: []
---
# original_glue_boolq

This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the BoolQ task of the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3297
- Accuracy: 0.8700

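The snippet below is a minimal inference sketch. The checkpoint path, the BoolQ-style prompt template, and the decoding settings are assumptions; the prompt format actually used during fine-tuning was not recorded in this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "original_glue_boolq"  # assumed path/id of this checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# BoolQ-style input: a passage plus a yes/no question (template assumed).
passage = "BoolQ is a question answering dataset of naturally occurring yes/no questions."
question = "Is BoolQ a yes/no question answering dataset?"
prompt = f"{passage}\nQuestion: {question}\nAnswer (yes or no):"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=3, do_sample=False)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
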
## Model description

This checkpoint is Mistral-7B-Instruct-v0.1 fine-tuned with TRL's supervised fine-tuning (SFT) trainer for BoolQ-style yes/no question answering. No further architectural or training details were recorded by the Trainer.

## Intended uses & limitations

The model is intended for answering yes/no questions about a given passage, in the BoolQ format used for fine-tuning. It has only been evaluated on the super_glue evaluation set reported above; performance on other tasks, prompt formats, or domains has not been assessed, and the model inherits the limitations and biases of the Mistral-7B-Instruct-v0.1 base model.

## Training and evaluation data

The model was trained and evaluated on the super_glue dataset (per the dataset tag above); the model name indicates the BoolQ task, in which each example pairs a passage with a yes/no question and a binary label. The results above are reported on this dataset's evaluation split.

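For reference, a loading sketch for the assumed data; the BoolQ config name is inferred from the model name rather than recorded in the card:

```python
from datasets import load_dataset

# BoolQ config of super_glue: each example has a passage,
# a yes/no question, and a binary label.
boolq = load_dataset("super_glue", "boolq")
print(boolq["train"][0])  # {'question': ..., 'passage': ..., 'idx': ..., 'label': ...}
```
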
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` reconstruction follows the list):
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 2
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

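Note that total_train_batch_size = 2 (per-device) × 2 (GPUs) × 2 (gradient accumulation) = 8. Below is a minimal `TrainingArguments` sketch matching the values listed; the output directory and anything not listed (warmup, weight decay, precision) are assumptions, since the training script was not published:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction from the hyperparameters above; the actual
# training script for this run was not published with the card.
training_args = TrainingArguments(
    output_dir="original_glue_boolq",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=2,     # train_batch_size: 2
    per_device_eval_batch_size=4,      # eval_batch_size: 4
    gradient_accumulation_steps=2,     # 2 per device x 2 GPUs x 2 accum = 8 total
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=2,
    adam_beta1=0.9,                    # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```
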
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4632        | 0.05  | 50   | 0.4840          | 0.7958   |
| 0.3453        | 0.1   | 100  | 0.3888          | 0.8226   |
| 0.2722        | 0.15  | 150  | 0.3590          | 0.8396   |
| 0.3266        | 0.2   | 200  | 0.3811          | 0.8459   |
| 0.3699        | 0.25  | 250  | 0.3534          | 0.8438   |
| 0.3554        | 0.3   | 300  | 0.3378          | 0.8565   |
| 0.1229        | 0.35  | 350  | 0.3368          | 0.8643   |
| 0.3522        | 0.4   | 400  | 0.3424          | 0.8643   |
| 0.2548        | 0.45  | 450  | 0.3467          | 0.8664   |
| 0.2119        | 0.5   | 500  | 0.3439          | 0.8714   |
| 0.2113        | 0.55  | 550  | 0.3518          | 0.8657   |
| 0.2122        | 0.6   | 600  | 0.3110          | 0.8770   |
| 0.3251        | 0.65  | 650  | 0.3323          | 0.8728   |
| 0.2904        | 0.7   | 700  | 0.3152          | 0.8792   |
| 0.6366        | 0.75  | 750  | 0.3502          | 0.8763   |
| 0.4161        | 0.8   | 800  | 0.3250          | 0.8806   |
| 0.1605        | 0.85  | 850  | 0.3258          | 0.8834   |
| 0.2710        | 0.9   | 900  | 0.3330          | 0.8848   |

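The accuracy column was presumably produced by a `compute_metrics` hook passed to the Trainer; the exact implementation was not published, but a minimal sketch of such a hook looks like this:

```python
import numpy as np

# Hypothetical metric hook; the actual implementation for this run is unknown.
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {"accuracy": float((predictions == labels).mean())}
```
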
### Framework versions

- Transformers 4.35.2
- PyTorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0