---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-6.7b-base
tags:
- generated_from_trainer
model-index:
- name: lemexp-task1-v3-template_small_nodefs-deepseek-coder-6.7b-base
  results: []
---

# lemexp-task1-v3-template_small_nodefs-deepseek-coder-6.7b-base

This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1322
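
The adapter can be loaded on top of the base model with `peft`. A minimal sketch, assuming the adapter is published under the model name above (the repo id is an assumption):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Versions recorded in this card: peft 0.14.0, transformers 4.47.0, torch 2.5.1
base_id = "deepseek-ai/deepseek-coder-6.7b-base"
adapter_id = "lemexp-task1-v3-template_small_nodefs-deepseek-coder-6.7b-base"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the PEFT adapter

prompt = "-- your prompt here --"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```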

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
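
These settings correspond roughly to the `TrainingArguments` below. A minimal sketch only: the `LoraConfig` values are illustrative assumptions, since the adapter configuration is not recorded in this card.

```python
from transformers import TrainingArguments
from peft import LoraConfig

# Hypothetical adapter settings; not documented in this card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# Mirrors the hyperparameters listed above. Run on 4 GPUs
# (e.g. via accelerate or torchrun) to match the total batch sizes.
training_args = TrainingArguments(
    output_dir="lemexp-task1-v3-template_small_nodefs-deepseek-coder-6.7b-base",
    learning_rate=4e-4,
    per_device_train_batch_size=4,  # x 4 devices = total train batch size 16
    per_device_eval_batch_size=2,   # x 4 devices = total eval batch size 8
    seed=42,
    num_train_epochs=12,
    lr_scheduler_type="linear",
    optim="adamw_torch",            # AdamW, betas=(0.9, 0.999), eps=1e-08
    fp16=True,                      # Native AMP mixed precision
)
```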

### Training results

| Training Loss | Epoch   | Step  | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 0.2967        | 0.2001  | 720   | 0.1984          |
| 0.1959        | 0.4001  | 1440  | 0.1738          |
| 0.1636        | 0.6002  | 2160  | 0.1623          |
| 0.1539        | 0.8002  | 2880  | 0.1537          |
| 0.1384        | 1.0003  | 3600  | 0.1512          |
| 0.1317        | 1.2003  | 4320  | 0.1433          |
| 0.1237        | 1.4004  | 5040  | 0.1368          |
| 0.1174        | 1.6004  | 5760  | 0.1396          |
| 0.1178        | 1.8005  | 6480  | 0.1316          |
| 0.1134        | 2.0006  | 7200  | 0.1312          |
| 0.1075        | 2.2006  | 7920  | 0.1269          |
| 0.1018        | 2.4007  | 8640  | 0.1254          |
| 0.1018        | 2.6007  | 9360  | 0.1270          |
| 0.0931        | 2.8008  | 10080 | 0.1249          |
| 0.0924        | 3.0008  | 10800 | 0.1218          |
| 0.0897        | 3.2009  | 11520 | 0.1216          |
| 0.0868        | 3.4009  | 12240 | 0.1241          |
| 0.0865        | 3.6010  | 12960 | 0.1148          |
| 0.0840        | 3.8011  | 13680 | 0.1159          |
| 0.0815        | 4.0011  | 14400 | 0.1176          |
| 0.0753        | 4.2012  | 15120 | 0.1139          |
| 0.0762        | 4.4012  | 15840 | 0.1140          |
| 0.0740        | 4.6013  | 16560 | 0.1131          |
| 0.0732        | 4.8013  | 17280 | 0.1108          |
| 0.0685        | 5.0014  | 18000 | 0.1152          |
| 0.0655        | 5.2014  | 18720 | 0.1140          |
| 0.0664        | 5.4015  | 19440 | 0.1120          |
| 0.0648        | 5.6016  | 20160 | 0.1131          |
| 0.0630        | 5.8016  | 20880 | 0.1125          |
| 0.0609        | 6.0017  | 21600 | 0.1141          |
| 0.0576        | 6.2017  | 22320 | 0.1105          |
| 0.0572        | 6.4018  | 23040 | 0.1143          |
| 0.0554        | 6.6018  | 23760 | 0.1115          |
| 0.0538        | 6.8019  | 24480 | 0.1113          |
| 0.0520        | 7.0019  | 25200 | 0.1132          |
| 0.0498        | 7.2020  | 25920 | 0.1132          |
| 0.0485        | 7.4021  | 26640 | 0.1115          |
| 0.0483        | 7.6021  | 27360 | 0.1115          |
| 0.0469        | 7.8022  | 28080 | 0.1126          |
| 0.0443        | 8.0022  | 28800 | 0.1134          |
| 0.0421        | 8.2023  | 29520 | 0.1150          |
| 0.0411        | 8.4023  | 30240 | 0.1144          |
| 0.0412        | 8.6024  | 30960 | 0.1117          |
| 0.0391        | 8.8024  | 31680 | 0.1127          |
| 0.0403        | 9.0025  | 32400 | 0.1162          |
| 0.0354        | 9.2026  | 33120 | 0.1193          |
| 0.0354        | 9.4026  | 33840 | 0.1218          |
| 0.0352        | 9.6027  | 34560 | 0.1196          |
| 0.0356        | 9.8027  | 35280 | 0.1236          |
| 0.0331        | 10.0028 | 36000 | 0.1234          |
| 0.0320        | 10.2028 | 36720 | 0.1265          |
| 0.0302        | 10.4029 | 37440 | 0.1289          |
| 0.0301        | 10.6029 | 38160 | 0.1280          |
| 0.0295        | 10.8030 | 38880 | 0.1259          |
| 0.0280        | 11.0031 | 39600 | 0.1295          |
| 0.0274        | 11.2031 | 40320 | 0.1308          |
| 0.0271        | 11.4032 | 41040 | 0.1309          |
| 0.0267        | 11.6032 | 41760 | 0.1339          |
| 0.0259        | 11.8033 | 42480 | 0.1322          |
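
Validation loss bottoms out at 0.1108 around epoch 4.8 and drifts upward over the later epochs, so the final value of 0.1322 is not the best checkpoint. As a quick reference, a cross-entropy loss converts to a token-level perplexity via `exp`:

```python
import math

print(math.exp(0.1108))  # ~1.117, best validation loss
print(math.exp(0.1322))  # ~1.141, final-epoch validation loss
```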


### Framework versions

- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.1