---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-6.7b-base
tags:
- generated_from_trainer
model-index:
- name: lemexp-task1-v3-template_small-deepseek-coder-6.7b-base
  results: []
---


# lemexp-task1-v3-template_small-deepseek-coder-6.7b-base

This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1445
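
This repository contains a PEFT adapter rather than standalone model weights, so inference requires loading the base model first and applying the adapter on top. Below is a minimal loading sketch; the adapter repo id is assumed from the model name and the prompt is only illustrative, so adjust both to your setup.

```python
# Minimal inference sketch for a PEFT adapter on top of the base model.
# The adapter repo id below is assumed from the model name; adjust as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "deepseek-ai/deepseek-coder-6.7b-base"
adapter_id = "lemexp-task1-v3-template_small-deepseek-coder-6.7b-base"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative prompt only; the training task and dataset are not documented here.
prompt = "def quicksort(arr):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```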

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 0.0004
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
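
For reference, these settings map onto the standard `transformers` `TrainingArguments` roughly as follows. This is an illustrative sketch (the `output_dir` is hypothetical and "Native AMP" is assumed to mean fp16), not the exact training script:

```python
# Illustrative mapping of the hyperparameters above onto TrainingArguments;
# not the exact script used to train this model.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lemexp-task1-v3-template_small-deepseek-coder-6.7b-base",  # hypothetical
    learning_rate=4e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,  # 2 per device x 4 GPUs x 2 steps = 16 effective
    seed=42,
    optim="adamw_torch",            # AdamW with betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=12,
    fp16=True,                      # native AMP mixed precision (fp16 assumed)
)
```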

### Training results

| Training Loss | Epoch   | Step  | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 0.6105        | 0.2001  | 720   | 0.1979          |
| 0.3924        | 0.4002  | 1440  | 0.1762          |
| 0.3264        | 0.6003  | 2160  | 0.1657          |
| 0.3094        | 0.8003  | 2880  | 0.1514          |
| 0.2879        | 1.0003  | 3600  | 0.1473          |
| 0.2563        | 1.2004  | 4320  | 0.1422          |
| 0.2479        | 1.4004  | 5040  | 0.1370          |
| 0.2451        | 1.6005  | 5760  | 0.1394          |
| 0.2413        | 1.8006  | 6480  | 0.1315          |
| 0.2317        | 2.0006  | 7200  | 0.1288          |
| 0.2101        | 2.2006  | 7920  | 0.1282          |
| 0.2039        | 2.4007  | 8640  | 0.1231          |
| 0.2006        | 2.6008  | 9360  | 0.1211          |
| 0.2003        | 2.8009  | 10080 | 0.1172          |
| 0.2044        | 3.0008  | 10800 | 0.1192          |
| 0.1712        | 3.2009  | 11520 | 0.1202          |
| 0.1762        | 3.4010  | 12240 | 0.1164          |
| 0.1732        | 3.6011  | 12960 | 0.1136          |
| 0.1731        | 3.8012  | 13680 | 0.1141          |
| 0.1723        | 4.0011  | 14400 | 0.1121          |
| 0.1466        | 4.2012  | 15120 | 0.1163          |
| 0.1491        | 4.4013  | 15840 | 0.1121          |
| 0.1485        | 4.6014  | 16560 | 0.1144          |
| 0.1515        | 4.8014  | 17280 | 0.1088          |
| 0.1496        | 5.0014  | 18000 | 0.1089          |
| 0.1261        | 5.2015  | 18720 | 0.1125          |
| 0.1281        | 5.4016  | 19440 | 0.1087          |
| 0.1308        | 5.6016  | 20160 | 0.1090          |
| 0.1319        | 5.8017  | 20880 | 0.1106          |
| 0.1300        | 6.0017  | 21600 | 0.1058          |
| 0.1119        | 6.2018  | 22320 | 0.1176          |
| 0.1134        | 6.4018  | 23040 | 0.1124          |
| 0.1133        | 6.6019  | 23760 | 0.1133          |
| 0.1141        | 6.8020  | 24480 | 0.1135          |
| 0.1134        | 7.0019  | 25200 | 0.1110          |
| 0.1000        | 7.2020  | 25920 | 0.1170          |
| 0.0964        | 7.4021  | 26640 | 0.1099          |
| 0.0986        | 7.6022  | 27360 | 0.1141          |
| 0.0984        | 7.8023  | 28080 | 0.1096          |
| 0.0992        | 8.0022  | 28800 | 0.1101          |
| 0.0832        | 8.2023  | 29520 | 0.1185          |
| 0.0808        | 8.4024  | 30240 | 0.1157          |
| 0.0837        | 8.6025  | 30960 | 0.1191          |
| 0.0845        | 8.8026  | 31680 | 0.1213          |
| 0.0834        | 9.0025  | 32400 | 0.1222          |
| 0.0703        | 9.2026  | 33120 | 0.1285          |
| 0.0714        | 9.4027  | 33840 | 0.1220          |
| 0.0725        | 9.6028  | 34560 | 0.1245          |
| 0.0742        | 9.8028  | 35280 | 0.1253          |
| 0.0709        | 10.0028 | 36000 | 0.1231          |
| 0.0618        | 10.2029 | 36720 | 0.1325          |
| 0.0611        | 10.4029 | 37440 | 0.1318          |
| 0.0621        | 10.6030 | 38160 | 0.1354          |
| 0.0620        | 10.8031 | 38880 | 0.1352          |
| 0.0630        | 11.0031 | 39600 | 0.1332          |
| 0.0554        | 11.2031 | 40320 | 0.1455          |
| 0.0552        | 11.4032 | 41040 | 0.1436          |
| 0.0546        | 11.6033 | 41760 | 0.1423          |
| 0.0552        | 11.8034 | 42480 | 0.1445          |


### Framework versions

- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.1
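
To reproduce the environment, installed versions can be checked against the list above; a quick sketch:

```python
# Quick check of installed versions against those listed above.
import datasets, peft, tokenizers, torch, transformers

for name, module in [
    ("PEFT", peft),            # expected: 0.14.0
    ("Transformers", transformers),  # expected: 4.47.0
    ("Pytorch", torch),        # expected: 2.5.1+cu124
    ("Datasets", datasets),    # expected: 3.2.0
    ("Tokenizers", tokenizers),  # expected: 0.21.1
]:
    print(f"{name}: {module.__version__}")
```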