---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-6.7b-base
tags:
- base_model:adapter:deepseek-ai/deepseek-coder-6.7b-base
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: lemexp-task1-v3-lemma_object_full_nodefs-deepseek-coder-6.7b-base
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# lemexp-task1-v3-lemma_object_full_nodefs-deepseek-coder-6.7b-base

This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1302
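
Since this repository holds a PEFT (LoRA) adapter rather than full model weights, it is loaded on top of the base model. Below is a minimal loading sketch; the adapter's hub namespace, the dtype, and the prompt format are assumptions, as none of them are documented in this card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/deepseek-coder-6.7b-base"
# The adapter's hub namespace is not stated in this card; replace <user> accordingly.
adapter_id = "<user>/lemexp-task1-v3-lemma_object_full_nodefs-deepseek-coder-6.7b-base"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # dtype choice is an assumption
    device_map="auto",
)
# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

# The training prompt format is not documented here; this is a placeholder.
prompt = "..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```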

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0004
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
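
For reference, here is a hedged sketch of how the values above map onto `transformers.TrainingArguments`. Only the listed hyperparameters are grounded in this card; the output directory and precision flag are assumptions, and the LoRA/PEFT configuration is not documented here at all:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lemexp-task1-v3-lemma_object_full_nodefs-deepseek-coder-6.7b-base",
    learning_rate=4e-4,
    per_device_train_batch_size=4,  # x4 GPUs -> total train batch size 16
    per_device_eval_batch_size=2,   # x4 GPUs -> total eval batch size 8
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=12,
    fp16=True,  # "Native AMP"; whether fp16 or bf16 was used is an assumption
)
```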

### Training results

| Training Loss | Epoch   | Step   | Validation Loss |
|:-------------:|:-------:|:------:|:---------------:|
| 0.3108        | 0.2000  | 3114   | 0.3017          |
| 0.2685        | 0.4000  | 6228   | 0.2551          |
| 0.2493        | 0.6000  | 9342   | 0.2313          |
| 0.2335        | 0.8001  | 12456  | 0.2242          |
| 0.2223        | 1.0001  | 15570  | 0.2162          |
| 0.2042        | 1.2001  | 18684  | 0.2044          |
| 0.1984        | 1.4001  | 21798  | 0.1986          |
| 0.1993        | 1.6001  | 24912  | 0.1940          |
| 0.1931        | 1.8001  | 28026  | 0.1886          |
| 0.1887        | 2.0001  | 31140  | 0.1876          |
| 0.1712        | 2.2001  | 34254  | 0.1830          |
| 0.1751        | 2.4002  | 37368  | 0.1793          |
| 0.1714        | 2.6002  | 40482  | 0.1794          |
| 0.1736        | 2.8002  | 43596  | 0.1795          |
| 0.1680        | 3.0002  | 46710  | 0.1722          |
| 0.1564        | 3.2002  | 49824  | 0.1717          |
| 0.1560        | 3.4002  | 52938  | 0.1695          |
| 0.1540        | 3.6002  | 56052  | 0.1673          |
| 0.1538        | 3.8002  | 59166  | 0.1666          |
| 0.1539        | 4.0003  | 62280  | 0.1657          |
| 0.1388        | 4.2003  | 65394  | 0.1624          |
| 0.1406        | 4.4003  | 68508  | 0.1634          |
| 0.1379        | 4.6003  | 71622  | 0.1573          |
| 0.1407        | 4.8003  | 74736  | 0.1580          |
| 0.1395        | 5.0003  | 77850  | 0.1577          |
| 0.1245        | 5.2003  | 80964  | 0.1550          |
| 0.1286        | 5.4003  | 84078  | 0.1559          |
| 0.1283        | 5.6004  | 87192  | 0.1521          |
| 0.1254        | 5.8004  | 90306  | 0.1480          |
| 0.1254        | 6.0004  | 93420  | 0.1445          |
| 0.1129        | 6.2004  | 96534  | 0.1441          |
| 0.1146        | 6.4004  | 99648  | 0.1441          |
| 0.1166        | 6.6004  | 102762 | 0.1420          |
| 0.1159        | 6.8004  | 105876 | 0.1436          |
| 0.1155        | 7.0004  | 108990 | 0.1407          |
| 0.1012        | 7.2005  | 112104 | 0.1419          |
| 0.0994        | 7.4005  | 115218 | 0.1419          |
| 0.1012        | 7.6005  | 118332 | 0.1383          |
| 0.1027        | 7.8005  | 121446 | 0.1381          |
| 0.1014        | 8.0005  | 124560 | 0.1354          |
| 0.0896        | 8.2005  | 127674 | 0.1385          |
| 0.0893        | 8.4005  | 130788 | 0.1377          |
| 0.0918        | 8.6006  | 133902 | 0.1349          |
| 0.0915        | 8.8006  | 137016 | 0.1312          |
| 0.0878        | 9.0006  | 140130 | 0.1323          |
| 0.0757        | 9.2006  | 143244 | 0.1349          |
| 0.0772        | 9.4006  | 146358 | 0.1326          |
| 0.0779        | 9.6006  | 149472 | 0.1308          |
| 0.0757        | 9.8006  | 152586 | 0.1293          |
| 0.0768        | 10.0006 | 155700 | 0.1298          |
| 0.0661        | 10.2007 | 158814 | 0.1355          |
| 0.0648        | 10.4007 | 161928 | 0.1330          |
| 0.0661        | 10.6007 | 165042 | 0.1313          |
| 0.0643        | 10.8007 | 168156 | 0.1283          |
| 0.0627        | 11.0007 | 171270 | 0.1300          |
| 0.0567        | 11.2007 | 174384 | 0.1335          |
| 0.0569        | 11.4007 | 177498 | 0.1327          |
| 0.0553        | 11.6007 | 180612 | 0.1323          |
| 0.0558        | 11.8008 | 183726 | 0.1302          |


### Framework versions

- PEFT 0.17.1
- Transformers 4.55.4
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4