---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: lc_random_repeat
results: []
---
# lc_random_repeat
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6186
## Model description
This repository contains a PEFT adapter for [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), produced by supervised fine-tuning (SFT) with TRL, per the card's tags. Further details about the adapter configuration have not been provided.
## Intended uses & limitations
The intended downstream uses and limitations of this adapter have not been documented. Note that this repository holds PEFT adapter weights only, which must be loaded on top of the base model, as sketched below.
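A minimal loading sketch, assuming the standard `peft` workflow; `ADAPTER_ID` is a placeholder, since this card does not state the published Hub repo id:

```python
# Minimal sketch: attach this PEFT adapter to the Mistral-7B-v0.1 base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "mistralai/Mistral-7B-v0.1"
ADAPTER_ID = "your-username/lc_random_repeat"  # placeholder: substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # load adapter weights on top

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```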
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 50
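For reference, a hedged sketch of how these settings map onto `transformers.TrainingArguments`, the configuration object consumed by TRL's `SFTTrainer` (which the `trl`/`sft` tags suggest was used); `output_dir` is a placeholder, and per-epoch evaluation is inferred from the results table below:

```python
# Sketch only: the listed hyperparameters expressed as TrainingArguments.
# The actual training script is not included in this repository.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="lc_random_repeat",   # placeholder; actual path unknown
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,                  # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # and epsilon=1e-08
    lr_scheduler_type="cosine",
    num_train_epochs=50,
    evaluation_strategy="epoch",     # assumption: one eval per epoch, per the table below
    logging_strategy="epoch",
)
```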
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4379 | 1.0 | 179 | 1.4131 |
| 1.4264 | 2.0 | 358 | 1.3916 |
| 1.3008 | 3.0 | 537 | 1.3836 |
| 1.3527 | 4.0 | 716 | 1.3795 |
| 1.3232 | 5.0 | 895 | 1.3847 |
| 1.123 | 6.0 | 1074 | 1.3991 |
| 1.2001 | 7.0 | 1253 | 1.3956 |
| 1.2633 | 8.0 | 1432 | 1.4122 |
| 1.246 | 9.0 | 1611 | 1.4136 |
| 1.1377 | 10.0 | 1790 | 1.4250 |
| 1.1705 | 11.0 | 1969 | 1.4381 |
| 1.1582 | 12.0 | 2148 | 1.4502 |
| 1.1204 | 13.0 | 2327 | 1.4652 |
| 1.164 | 14.0 | 2506 | 1.4837 |
| 1.1277 | 15.0 | 2685 | 1.4842 |
| 1.0011 | 16.0 | 2864 | 1.5123 |
| 1.018 | 17.0 | 3043 | 1.5116 |
| 1.0135 | 18.0 | 3222 | 1.5332 |
| 1.0115 | 19.0 | 3401 | 1.5386 |
| 0.9707 | 20.0 | 3580 | 1.5623 |
| 1.105 | 21.0 | 3759 | 1.5591 |
| 1.0399 | 22.0 | 3938 | 1.5469 |
| 1.0203 | 23.0 | 4117 | 1.5725 |
| 1.0629 | 24.0 | 4296 | 1.5643 |
| 0.9934 | 25.0 | 4475 | 1.5698 |
| 0.9231 | 26.0 | 4654 | 1.5924 |
| 0.9423 | 27.0 | 4833 | 1.5916 |
| 1.0153 | 28.0 | 5012 | 1.5976 |
| 0.9623 | 29.0 | 5191 | 1.6060 |
| 0.796 | 30.0 | 5370 | 1.6051 |
| 0.9592 | 31.0 | 5549 | 1.6067 |
| 0.9507 | 32.0 | 5728 | 1.6060 |
| 0.7748 | 33.0 | 5907 | 1.6183 |
| 1.0306 | 34.0 | 6086 | 1.6126 |
| 0.8095 | 35.0 | 6265 | 1.6120 |
| 0.8423 | 36.0 | 6444 | 1.6163 |
| 0.8873 | 37.0 | 6623 | 1.6162 |
| 0.9068 | 38.0 | 6802 | 1.6177 |
| 0.9126 | 39.0 | 6981 | 1.6182 |
| 0.948 | 40.0 | 7160 | 1.6169 |
| 0.8804 | 41.0 | 7339 | 1.6188 |
| 0.8854 | 42.0 | 7518 | 1.6185 |
| 0.8272 | 43.0 | 7697 | 1.6170 |
| 0.9812 | 44.0 | 7876 | 1.6179 |
| 0.8171 | 45.0 | 8055 | 1.6183 |
| 0.8417 | 46.0 | 8234 | 1.6177 |
| 0.9621 | 47.0 | 8413 | 1.6188 |
| 0.9239 | 48.0 | 8592 | 1.6179 |
| 0.9687 | 49.0 | 8771 | 1.6189 |
| 0.8301 | 50.0 | 8950 | 1.6186 |

Validation loss bottoms out at 1.3795 at epoch 4 and climbs steadily afterward, which suggests the final checkpoint overfits relative to the epoch-4 checkpoint.
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- PyTorch 2.1.0+cu118
- Datasets 2.19.2
- Tokenizers 0.19.1