# lemexp-task1-v3-template_full_nodefs-deepseek-coder-6.7b-base
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0852
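Since the framework versions below list PEFT, this checkpoint is an adapter applied on top of the base model rather than standalone weights. A minimal loading sketch, assuming the adapter is published under the repo id in this card's title; the prompt string, dtype, and device placement are illustrative choices, not part of the original card:

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Adapter repo id taken from this card's title.
adapter_id = "yalhessi/lemexp-task1-v3-template_full_nodefs-deepseek-coder-6.7b-base"

# Loads deepseek-ai/deepseek-coder-6.7b-base and applies the adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; adjust to your hardware
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base")

inputs = tokenizer("-- illustrative prompt --", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```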
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows this list):
- learning_rate: 0.0004
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
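For reference, a rough reconstruction of the configuration above as Hugging Face `TrainingArguments`; this is a sketch, not the exact training script, and values not stated in the card (such as `output_dir`) are placeholders:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lemexp-task1-v3-template_full_nodefs",  # placeholder, not from the card
    learning_rate=4e-4,
    per_device_train_batch_size=4,   # x 4 GPUs => total train batch size 16
    per_device_eval_batch_size=2,    # x 4 GPUs => total eval batch size 8
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=12,
    fp16=True,  # "Native AMP" mixed-precision training (assumed fp16 rather than bf16)
)
```

The multi-GPU settings (`distributed_type: multi-GPU`, `num_devices: 4`) come from the launcher (e.g. `accelerate launch` or `torchrun`), not from `TrainingArguments` itself.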
### Training results

Note that the headline loss above (0.0852) is the final evaluation at epoch 11.8; validation loss bottomed out at 0.0794 around epoch 10 and drifted slightly upward over the last two epochs.
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 0.1813 | 0.2000 | 3114 | 0.1647 |
| 0.1601 | 0.4000 | 6228 | 0.1474 |
| 0.1496 | 0.6000 | 9342 | 0.1388 |
| 0.1441 | 0.8001 | 12456 | 0.1315 |
| 0.1408 | 1.0001 | 15570 | 0.1298 |
| 0.1308 | 1.2001 | 18684 | 0.1230 |
| 0.128 | 1.4001 | 21798 | 0.1255 |
| 0.1281 | 1.6001 | 24912 | 0.1197 |
| 0.1239 | 1.8001 | 28026 | 0.1166 |
| 0.1226 | 2.0001 | 31140 | 0.1182 |
| 0.1136 | 2.2001 | 34254 | 0.1165 |
| 0.1148 | 2.4002 | 37368 | 0.1146 |
| 0.1154 | 2.6002 | 40482 | 0.1091 |
| 0.1142 | 2.8002 | 43596 | 0.1068 |
| 0.1092 | 3.0002 | 46710 | 0.1060 |
| 0.1035 | 3.2002 | 49824 | 0.1081 |
| 0.1045 | 3.4002 | 52938 | 0.1037 |
| 0.1027 | 3.6002 | 56052 | 0.1050 |
| 0.1025 | 3.8002 | 59166 | 0.1022 |
| 0.102 | 4.0003 | 62280 | 0.1009 |
| 0.0943 | 4.2003 | 65394 | 0.0982 |
| 0.0948 | 4.4003 | 68508 | 0.0991 |
| 0.0939 | 4.6003 | 71622 | 0.0965 |
| 0.0938 | 4.8003 | 74736 | 0.0980 |
| 0.093 | 5.0003 | 77850 | 0.0950 |
| 0.0858 | 5.2003 | 80964 | 0.0946 |
| 0.0881 | 5.4003 | 84078 | 0.0965 |
| 0.0883 | 5.6004 | 87192 | 0.0948 |
| 0.0846 | 5.8004 | 90306 | 0.0940 |
| 0.0862 | 6.0004 | 93420 | 0.0927 |
| 0.0786 | 6.2004 | 96534 | 0.0926 |
| 0.0795 | 6.4004 | 99648 | 0.0915 |
| 0.0811 | 6.6004 | 102762 | 0.0901 |
| 0.0783 | 6.8004 | 105876 | 0.0896 |
| 0.0803 | 7.0004 | 108990 | 0.0890 |
| 0.0721 | 7.2005 | 112104 | 0.0893 |
| 0.0707 | 7.4005 | 115218 | 0.0856 |
| 0.0721 | 7.6005 | 118332 | 0.0873 |
| 0.0706 | 7.8005 | 121446 | 0.0856 |
| 0.0705 | 8.0005 | 124560 | 0.0848 |
| 0.0634 | 8.2005 | 127674 | 0.0884 |
| 0.0632 | 8.4005 | 130788 | 0.0848 |
| 0.0653 | 8.6006 | 133902 | 0.0843 |
| 0.0652 | 8.8006 | 137016 | 0.0841 |
| 0.0616 | 9.0006 | 140130 | 0.0825 |
| 0.0543 | 9.2006 | 143244 | 0.0836 |
| 0.055 | 9.4006 | 146358 | 0.0832 |
| 0.055 | 9.6006 | 149472 | 0.0818 |
| 0.0544 | 9.8006 | 152586 | 0.0808 |
| 0.0543 | 10.0006 | 155700 | 0.0794 |
| 0.0473 | 10.2007 | 158814 | 0.0818 |
| 0.0475 | 10.4007 | 161928 | 0.0832 |
| 0.0481 | 10.6007 | 165042 | 0.0831 |
| 0.0466 | 10.8007 | 168156 | 0.0812 |
| 0.0461 | 11.0007 | 171270 | 0.0826 |
| 0.0414 | 11.2007 | 174384 | 0.0856 |
| 0.0417 | 11.4007 | 177498 | 0.0850 |
| 0.0404 | 11.6007 | 180612 | 0.0832 |
| 0.0413 | 11.8008 | 183726 | 0.0852 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- PyTorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.1
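To reproduce results, a quick sanity check (an assumed helper, not part of the original card) that the installed packages match the versions listed above:

```python
from importlib.metadata import version

# Expected versions as listed in this card.
expected = {
    "peft": "0.14.0",
    "transformers": "4.47.0",
    "torch": "2.5.1+cu124",
    "datasets": "3.2.0",
    "tokenizers": "0.21.1",
}
for pkg, want in expected.items():
    have = version(pkg)
    print(f"{pkg}: installed {have}, expected {want}")
```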