SolaceinLoneSun committed
Commit ddad796 · verified · 1 Parent(s): 1086a35

Model save

Files changed (1):
  1. README.md +12 -1
README.md CHANGED
@@ -15,6 +15,8 @@ should probably proofread and complete it, then remove this comment. -->
 # SolaceAI
 
 This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 3.0826
 
 ## Model description
 
@@ -42,9 +44,18 @@ The following hyperparameters were used during training:
 - optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 2
-- training_steps: 4000
+- training_steps: 3000
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch  | Step | Validation Loss |
+|:-------------:|:------:|:----:|:---------------:|
+| 12.5314       | 0.1692 | 1000 | 3.1412          |
+| 12.3003       | 0.3385 | 2000 | 3.0979          |
+| 12.4202       | 0.5077 | 3000 | 3.0826          |
+
+
 ### Framework versions
 
 - PEFT 0.14.0
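
The hyperparameters in the diff specify lr_scheduler_type: linear with lr_scheduler_warmup_steps: 2, and this commit lowers training_steps to 3000. A minimal sketch of that schedule in plain Python (the diff excerpt does not show the base learning rate, so base_lr=2e-4 below is a placeholder, not a value from the card):

```python
def linear_warmup_lr(step, base_lr=2e-4, warmup_steps=2, total_steps=3000):
    """Learning rate at a given optimizer step for a linear-warmup,
    linear-decay schedule (lr_scheduler_type: linear)."""
    if step < warmup_steps:
        # Linear warmup: 0 -> base_lr over warmup_steps
        return base_lr * step / warmup_steps
    # Linear decay: base_lr at the end of warmup -> 0 at total_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

With only 2 warmup steps out of 3000, the rate reaches base_lr almost immediately and then decays linearly to zero over the rest of training.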
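
The optimizer line names adamw_bnb_8bit, the bitsandbytes 8-bit variant of AdamW: it quantizes the optimizer moment state to save memory, but the update rule itself is standard AdamW with the stated betas=(0.9,0.999) and epsilon=1e-08. A scalar sketch of one such update (the lr and weight_decay values here are illustrative, not from the card):

```python
import math

def adamw_step(param, grad, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.0):
    """One AdamW update on a scalar parameter; t is the 1-based step count."""
    # Exponential moving averages of the gradient and its square
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    # Bias correction for the zero-initialized moments
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Decoupled weight decay, then the Adam step
    param = param - lr * weight_decay * param
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v
```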