---
base_model: mistralai/Mistral-7B-Instruct-v0.3
datasets:
- generator
library_name: peft
license: apache-2.0
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: mistral_7b_cosine_lr_2e-4_bs2
  results: []
---

# mistral_7b_cosine_lr_2e-4_bs2

This model is a parameter-efficient (PEFT) fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3), trained with TRL's supervised fine-tuning (SFT) trainer on a dataset recorded here only under the placeholder name `generator`.
It achieves the following results on the evaluation set:
- Loss: 0.3819 (a token-level cross-entropy of 0.3819 corresponds to a perplexity of exp(0.3819) ≈ 1.47)

## Model description

This is a PEFT adapter for Mistral-7B-Instruct-v0.3, produced by supervised fine-tuning with TRL's `SFTTrainer`. The run name encodes the key settings: a cosine learning-rate schedule, a peak learning rate of 2e-4, and a per-device batch size of 2. The adapter architecture itself (e.g. LoRA rank and target modules) is not documented in this card.
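
A minimal usage sketch, assuming the adapter weights live at the hypothetical path below; substitute the actual Hub repo id or local directory:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.3"
adapter_id = "mistral_7b_cosine_lr_2e-4_bs2"  # hypothetical adapter path

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the fine-tuned adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "Explain LoRA in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```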

## Intended uses & limitations

As with the base model, the intended use is instruction-following / chat-style text generation. Because the training data is not documented, the domain the adapter was tuned for is unknown; users should assume it inherits the base model's limitations, including hallucination and the lack of any built-in moderation mechanism.

## Training and evaluation data

The only record of the data is the placeholder name `generator`, which is the name the `datasets` library assigns when a dataset is materialized from a Python generator (TRL's `SFTTrainer` does this, for example, when packing samples). The underlying corpus is not identified in this card.
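
A small illustration of where that placeholder name comes from (a sketch, not the original data pipeline):

```python
from datasets import Dataset

def gen():
    # Stand-in for however the real training samples were produced.
    yield {"text": "example training sample"}

ds = Dataset.from_generator(gen)
# Datasets built this way get the builder name "generator", which then
# propagates into auto-generated model cards as the dataset name.
print(ds.info.builder_name)  # "generator"
```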

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- lr_scheduler_warmup_steps: 15
- num_epochs: 4
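
The sketch below reconstructs these settings as a `transformers.TrainingArguments` instance; the output directory, eval/logging cadence, and optimizer choice are inferences, since the original training script is not included in this card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral_7b_cosine_lr_2e-4_bs2",  # assumed from the run name
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # 2 (per device) x 8 (accumulation) = 16 total
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    warmup_steps=15,                 # when both are set, warmup_steps takes precedence
    num_train_epochs=4,
    seed=42,
    eval_strategy="steps",
    eval_steps=10,                   # matches the 10-step eval cadence in the results table
    logging_steps=10,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer setup,
    # so no explicit optimizer argument is needed.
)
```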

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.5854        | 0.0366 | 10   | 0.7044          |
| 0.6542        | 0.0732 | 20   | 0.8683          |
| 0.5736        | 0.1098 | 30   | 0.5023          |
| 0.4886        | 0.1465 | 40   | 0.4735          |
| 0.4757        | 0.1831 | 50   | 0.4552          |
| 0.453         | 0.2197 | 60   | 0.4451          |
| 0.4494        | 0.2563 | 70   | 0.4380          |
| 0.4457        | 0.2929 | 80   | 0.4329          |
| 0.4353        | 0.3295 | 90   | 0.4271          |
| 0.434         | 0.3661 | 100  | 0.4239          |
| 0.4307        | 0.4027 | 110  | 0.4198          |
| 0.4256        | 0.4394 | 120  | 0.4167          |
| 0.4173        | 0.4760 | 130  | 0.4130          |
| 0.4195        | 0.5126 | 140  | 0.4100          |
| 0.4159        | 0.5492 | 150  | 0.4075          |
| 0.4102        | 0.5858 | 160  | 0.4045          |
| 0.4135        | 0.6224 | 170  | 0.4034          |
| 0.408         | 0.6590 | 180  | 0.4004          |
| 0.405         | 0.6957 | 190  | 0.3992          |
| 0.4053        | 0.7323 | 200  | 0.3960          |
| 0.3994        | 0.7689 | 210  | 0.3934          |
| 0.3968        | 0.8055 | 220  | 0.3914          |
| 0.3966        | 0.8421 | 230  | 0.3885          |
| 0.3894        | 0.8787 | 240  | 0.3868          |
| 0.3896        | 0.9153 | 250  | 0.3860          |
| 0.3939        | 0.9519 | 260  | 0.3836          |
| 0.387         | 0.9886 | 270  | 0.3818          |
| 0.3511        | 1.0252 | 280  | 0.3839          |
| 0.3316        | 1.0618 | 290  | 0.3834          |
| 0.3281        | 1.0984 | 300  | 0.3819          |
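
Although `num_epochs` was set to 4, the log ends at step 300 (epoch ≈ 1.10), and the reported evaluation loss of 0.3819 is the value at that last logged step, so training appears to have been stopped early or the log truncated. Validation loss plateaus around 0.38 late in the first epoch while training loss keeps falling after epoch 1.0, a common early sign of the adapter starting to overfit.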


### Framework versions

- PEFT 0.13.2
- Transformers 4.45.2
- PyTorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
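
To reproduce this run, it can help to check a local environment against these pins; a minimal sketch, assuming the standard PyPI package names:

```python
import importlib.metadata as md

# Versions listed in the "Framework versions" section above.
expected = {
    "peft": "0.13.2",
    "transformers": "4.45.2",
    "torch": "2.4.1+cu121",
    "datasets": "3.0.1",
    "tokenizers": "0.20.0",
}
for pkg, want in expected.items():
    have = md.version(pkg)
    status = "OK" if have == want else "differs"
    print(f"{pkg}: installed {have}, card lists {want} ({status})")
```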