Model save
- README.md +23 -18
- trainer_log.jsonl +19 -0
README.md CHANGED
@@ -1,36 +1,36 @@
 ---
-license:
+license: gemma
 library_name: peft
 tags:
-- llama-factory
-- lora
 - trl
 - dpo
+- llama-factory
+- lora
 - generated_from_trainer
 base_model: google/gemma-7b-it
 model-index:
-- name:
+- name: Gemma-7B-It-ORPO
   results: []
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->

-#
+# Gemma-7B-It-ORPO

-This model is a fine-tuned version of [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) on
+This model is a fine-tuned version of [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss:
-- Rewards/chosen: -0.
-- Rewards/rejected: -0.
-- Rewards/accuracies: 0.
-- Rewards/margins:
-- Logps/rejected: -
-- Logps/chosen: -
-- Logits/rejected:
-- Logits/chosen:
-- Sft Loss:
-- Odds Ratio Loss: 0.
+- Loss: 1.3471
+- Rewards/chosen: -0.1281
+- Rewards/rejected: -0.1500
+- Rewards/accuracies: 0.5610
+- Rewards/margins: 0.0219
+- Logps/rejected: -1.5004
+- Logps/chosen: -1.2814
+- Logits/rejected: 254.6614
+- Logits/chosen: 254.4679
+- Sft Loss: 1.2814
+- Odds Ratio Loss: 0.6571

 ## Model description

@@ -58,10 +58,15 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 0.1
-- num_epochs:
+- num_epochs: 3.0

 ### Training results

+| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Sft Loss | Odds Ratio Loss |
+|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:---------------:|
+| 1.5041 | 0.8891 | 500 | 1.4185 | -0.1352 | -0.1564 | 0.5530 | 0.0212 | -1.5644 | -1.3522 | 250.7549 | 250.6463 | 1.3522 | 0.6626 |
+| 1.428 | 1.7782 | 1000 | 1.3595 | -0.1294 | -0.1509 | 0.5600 | 0.0215 | -1.5091 | -1.2937 | 254.1350 | 253.9581 | 1.2937 | 0.6586 |
+| 1.3302 | 2.6673 | 1500 | 1.3471 | -0.1281 | -0.1500 | 0.5610 | 0.0219 | -1.5004 | -1.2814 | 254.6614 | 254.4679 | 1.2814 | 0.6571 |


 ### Framework versions
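For readers checking the numbers: in ORPO the training objective is the supervised negative log-likelihood on the chosen responses plus a weighted odds-ratio term, so the reported eval Loss should decompose into Sft Loss plus beta times Odds Ratio Loss, and the reward metrics are beta-scaled log-probabilities. The sketch below only re-derives the card's own figures; the beta value of 0.1 is an assumption (the diff does not state it), chosen because it reproduces the reported totals.

```python
# Sanity check of the ORPO loss decomposition, using only numbers reported in
# the model card above. beta = 0.1 is an assumption, not a value taken from
# this repository's config.
sft_loss = 1.2814          # eval "Sft Loss"
odds_ratio_loss = 0.6571   # eval "Odds Ratio Loss"
logps_chosen = -1.2814     # eval "Logps/chosen"
beta = 0.1                 # assumed preference beta

print(round(sft_loss + beta * odds_ratio_loss, 4))  # 1.3471 -> matches eval "Loss"
print(round(beta * logps_chosen, 4))                # -0.1281 -> matches "Rewards/chosen"
```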
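Because the card declares `library_name: peft` with a LoRA adapter on top of `google/gemma-7b-it`, the usual way to try the checkpoint is to load the base model and attach the adapter. This is a minimal sketch, not part of the original card: the adapter repo id is a placeholder for wherever this repository is hosted, and the prompt and generation settings are illustrative.

```python
# Load the base model and attach this LoRA adapter with peft + transformers.
# "your-username/Gemma-7B-It-ORPO" is a placeholder repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-7b-it"
adapter_id = "your-username/Gemma-7B-It-ORPO"  # hypothetical; substitute the real Hub path

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

prompt = "Explain odds-ratio preference optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```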
trainer_log.jsonl CHANGED
@@ -151,3 +151,22 @@
 {"current_steps": 1490, "total_steps": 1686, "loss": 1.289, "accuracy": 0.59375, "learning_rate": 1.6490167940538343e-07, "epoch": 2.6494776617026004, "percentage": 88.37, "elapsed_time": "5:34:21", "remaining_time": "0:43:58"}
 {"current_steps": 1500, "total_steps": 1686, "loss": 1.3302, "accuracy": 0.512499988079071, "learning_rate": 1.4866882516191339e-07, "epoch": 2.6672593909757722, "percentage": 88.97, "elapsed_time": "5:36:38", "remaining_time": "0:41:44"}
 {"current_steps": 1500, "total_steps": 1686, "eval_loss": 1.347064733505249, "epoch": 2.6672593909757722, "percentage": 88.97, "elapsed_time": "5:40:15", "remaining_time": "0:42:11"}
+{"current_steps": 1510, "total_steps": 1686, "loss": 1.3335, "accuracy": 0.606249988079071, "learning_rate": 1.3325243551706057e-07, "epoch": 2.685041120248944, "percentage": 89.56, "elapsed_time": "5:42:29", "remaining_time": "0:39:55"}
+{"current_steps": 1520, "total_steps": 1686, "loss": 1.3537, "accuracy": 0.59375, "learning_rate": 1.1865786358165737e-07, "epoch": 2.702822849522116, "percentage": 90.15, "elapsed_time": "5:44:37", "remaining_time": "0:37:38"}
+{"current_steps": 1530, "total_steps": 1686, "loss": 1.4013, "accuracy": 0.581250011920929, "learning_rate": 1.0489017710262311e-07, "epoch": 2.720604578795288, "percentage": 90.75, "elapsed_time": "5:46:51", "remaining_time": "0:35:21"}
+{"current_steps": 1540, "total_steps": 1686, "loss": 1.3524, "accuracy": 0.53125, "learning_rate": 9.195415670326446e-08, "epoch": 2.73838630806846, "percentage": 91.34, "elapsed_time": "5:49:06", "remaining_time": "0:33:05"}
+{"current_steps": 1550, "total_steps": 1686, "loss": 1.3267, "accuracy": 0.518750011920929, "learning_rate": 7.985429422327384e-08, "epoch": 2.7561680373416317, "percentage": 91.93, "elapsed_time": "5:51:13", "remaining_time": "0:30:49"}
+{"current_steps": 1560, "total_steps": 1686, "loss": 1.425, "accuracy": 0.5562499761581421, "learning_rate": 6.859479115900818e-08, "epoch": 2.773949766614803, "percentage": 92.53, "elapsed_time": "5:53:23", "remaining_time": "0:28:32"}
+{"current_steps": 1570, "total_steps": 1686, "loss": 1.262, "accuracy": 0.5562499761581421, "learning_rate": 5.817955720457902e-08, "epoch": 2.791731495887975, "percentage": 93.12, "elapsed_time": "5:55:33", "remaining_time": "0:26:16"}
+{"current_steps": 1580, "total_steps": 1686, "loss": 1.3469, "accuracy": 0.53125, "learning_rate": 4.861220889427199e-08, "epoch": 2.809513225161147, "percentage": 93.71, "elapsed_time": "5:57:36", "remaining_time": "0:23:59"}
+{"current_steps": 1590, "total_steps": 1686, "loss": 1.3588, "accuracy": 0.612500011920929, "learning_rate": 3.9896068346758074e-08, "epoch": 2.827294954434319, "percentage": 94.31, "elapsed_time": "5:59:45", "remaining_time": "0:21:43"}
+{"current_steps": 1600, "total_steps": 1686, "loss": 1.3344, "accuracy": 0.5, "learning_rate": 3.203416211153832e-08, "epoch": 2.8450766837074903, "percentage": 94.9, "elapsed_time": "6:01:55", "remaining_time": "0:19:27"}
+{"current_steps": 1610, "total_steps": 1686, "loss": 1.3818, "accuracy": 0.543749988079071, "learning_rate": 2.5029220118019393e-08, "epoch": 2.8628584129806622, "percentage": 95.49, "elapsed_time": "6:04:11", "remaining_time": "0:17:11"}
+{"current_steps": 1620, "total_steps": 1686, "loss": 1.2541, "accuracy": 0.6312500238418579, "learning_rate": 1.8883674727586122e-08, "epoch": 2.880640142253834, "percentage": 96.09, "elapsed_time": "6:06:17", "remaining_time": "0:14:55"}
+{"current_steps": 1630, "total_steps": 1686, "loss": 1.3872, "accuracy": 0.4749999940395355, "learning_rate": 1.3599659889000639e-08, "epoch": 2.898421871527006, "percentage": 96.68, "elapsed_time": "6:08:32", "remaining_time": "0:12:39"}
+{"current_steps": 1640, "total_steps": 1686, "loss": 1.3386, "accuracy": 0.5687500238418579, "learning_rate": 9.179010397421528e-09, "epoch": 2.916203600800178, "percentage": 97.27, "elapsed_time": "6:10:56", "remaining_time": "0:10:24"}
+{"current_steps": 1650, "total_steps": 1686, "loss": 1.2157, "accuracy": 0.550000011920929, "learning_rate": 5.623261257296509e-09, "epoch": 2.93398533007335, "percentage": 97.86, "elapsed_time": "6:13:07", "remaining_time": "0:08:08"}
+{"current_steps": 1660, "total_steps": 1686, "loss": 1.3705, "accuracy": 0.5562499761581421, "learning_rate": 2.933647149357122e-09, "epoch": 2.9517670593465217, "percentage": 98.46, "elapsed_time": "6:15:16", "remaining_time": "0:05:52"}
+{"current_steps": 1670, "total_steps": 1686, "loss": 1.3264, "accuracy": 0.48124998807907104, "learning_rate": 1.1111020018930717e-09, "epoch": 2.969548788619693, "percentage": 99.05, "elapsed_time": "6:17:28", "remaining_time": "0:03:36"}
+{"current_steps": 1680, "total_steps": 1686, "loss": 1.2721, "accuracy": 0.6312500238418579, "learning_rate": 1.5625866646051813e-10, "epoch": 2.987330517892865, "percentage": 99.64, "elapsed_time": "6:19:37", "remaining_time": "0:01:21"}
+{"current_steps": 1686, "total_steps": 1686, "epoch": 2.997999555456768, "percentage": 100.0, "elapsed_time": "6:21:02", "remaining_time": "0:00:00"}
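Each trainer_log.jsonl entry is one JSON object per logging step, with `loss` and `accuracy` on training records and `eval_loss` on evaluation records, as in the lines added above. A minimal sketch for summarising the file, assuming it is read from the repository root:

```python
# Split trainer_log.jsonl into training and evaluation curves.
import json

train_points, eval_points = [], []
with open("trainer_log.jsonl") as f:
    for line in f:
        record = json.loads(line)
        if "eval_loss" in record:
            eval_points.append((record["current_steps"], record["eval_loss"]))
        elif "loss" in record:
            train_points.append((record["current_steps"], record["loss"]))

print(f"{len(train_points)} training points, {len(eval_points)} eval points")
print("last eval:", eval_points[-1] if eval_points else None)  # (1500, 1.347064733505249) for this log
```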