tuandunghcmut committed
Commit fde92f6 · verified · 1 parent: dac8935

Best model at step 0 with eval_loss=1.6147 | WandB Run: https://wandb.ai/tuandung/Qwen2.5-Coder-1.5B-Instruct-LoRA-Training/runs/62a2gru6

Files changed (2):
  1. README.md +7 -7
  2. training_args.bin +1 -1
README.md CHANGED
@@ -17,10 +17,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [unsloth/qwen2.5-coder-1.5b-instruct-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-coder-1.5b-instruct-bnb-4bit) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- eval_loss: 1.6159
-- eval_runtime: 32.7693
-- eval_samples_per_second: 3.662
-- eval_steps_per_second: 0.458
+- eval_loss: 1.6147
+- eval_runtime: 25.1219
+- eval_samples_per_second: 3.583
+- eval_steps_per_second: 0.478
 - epoch: 0
 - step: 0
 
@@ -42,15 +42,15 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 12
+- train_batch_size: 20
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 2
-- total_train_batch_size: 24
+- total_train_batch_size: 40
 - optimizer: Use lion_8bit and the args are:
 No additional optimizer arguments
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps: 42
+- lr_scheduler_warmup_steps: 25
 - num_epochs: 3
 - mixed_precision_training: Native AMP
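The first hunk just refreshes the eval numbers (3.583 samples/s × 25.1219 s implies roughly 90 evaluation samples). The second hunk is the substantive change: with a per-device batch of 20 and 2 gradient-accumulation steps, the effective batch is 20 × 2 = 40 on a single GPU, matching the updated total_train_batch_size, and warmup drops from 42 to 25 scheduler steps. As a hedged sketch, here is how these values would map onto a transformers TrainingArguments (the object that training_args.bin below serializes); output_dir is a placeholder and fp16=True is an assumption ("Native AMP" could equally mean bf16), while every other value comes straight from the diff:

```python
from transformers import TrainingArguments

# Hedged sketch: the post-commit hyperparameters as a TrainingArguments object.
# Only values present in the README diff are real; output_dir is a placeholder
# and fp16=True is an assumption ("Native AMP" could also mean bf16).
args = TrainingArguments(
    output_dir="qwen2.5-coder-lora-out",  # hypothetical path
    learning_rate=5e-05,
    per_device_train_batch_size=20,       # was 12 before this commit
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,        # effective batch: 20 * 2 = 40 per optimizer step
    seed=42,
    optim="lion_8bit",                    # 8-bit Lion, provided by bitsandbytes
    lr_scheduler_type="cosine",
    warmup_steps=25,                      # was 42 before this commit
    num_train_epochs=3,
    fp16=True,                            # assumed; README reports "Native AMP"
)
```

Instantiating this requires accelerate alongside transformers, and the lion_8bit optimizer additionally needs bitsandbytes at train time.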
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8f73c30d15ea4819f2b7cd3587bf9dab490e03594356008e3dd6f0855a750ca4
+oid sha256:9878c99c5737dc3a0e54256c19b7bb707f7cbb263d22c0e46db557fed8635366
 size 5560
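This hunk only swaps the sha256 oid in the git-LFS pointer; the three pointer lines (version, oid, size) are the entire checked-in file, while the 5560-byte payload lives in LFS storage. A hedged sketch for inspecting the real file after `git lfs pull`: training_args.bin is the Trainer's pickled TrainingArguments, so it loads with torch.load, and the payload's sha256 should match the pointer's oid.

```python
import hashlib

import torch

# Hedged sketch: inspect the fetched training_args.bin (requires `git lfs pull`;
# the 3-line pointer file shown in the diff will not load).
with open("training_args.bin", "rb") as f:
    data = f.read()

# The LFS oid is the sha256 of the actual payload.
print(hashlib.sha256(data).hexdigest())
# expected after this commit:
# 9878c99c5737dc3a0e54256c19b7bb707f7cbb263d22c0e46db557fed8635366

# training_args.bin is a pickled TrainingArguments, hence weights_only=False.
training_args = torch.load("training_args.bin", weights_only=False)
print(training_args.per_device_train_batch_size)  # expected: 20
print(training_args.warmup_steps)                 # expected: 25
```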