End of training
- README.md +16 -0
- adapter_config.json +1 -1
- adapter_model.safetensors +1 -1
- metrics.jsonl +5 -5
- metrics_epoch_0.96_fold_0_lr_1e-05_seed_42_weight_2.0.json +1 -1
- metrics_epoch_2.96_fold_0_lr_1e-05_seed_42_weight_2.0.json +1 -1
- metrics_epoch_4.0_fold_0_lr_1e-05_seed_42_weight_2.0.json +1 -1
- metrics_epoch_4.96_fold_0_lr_1e-05_seed_42_weight_2.0.json +1 -1
- metrics_epoch_5.76_fold_0_lr_1e-05_seed_42_weight_2.0.json +1 -1
- results_epoch_0.96_fold_0_lr_1e-05_seed_42_weight_2.0.json +0 -0
- results_epoch_2.0_fold_0_lr_1e-05_seed_42_weight_2.0.json +0 -0
- results_epoch_2.96_fold_0_lr_1e-05_seed_42_weight_2.0.json +0 -0
- results_epoch_4.0_fold_0_lr_1e-05_seed_42_weight_2.0.json +0 -0
- results_epoch_4.96_fold_0_lr_1e-05_seed_42_weight_2.0.json +0 -0
- results_epoch_5.76_fold_0_lr_1e-05_seed_42_weight_2.0.json +0 -0
- training_args.bin +2 -2
README.md
CHANGED
@@ -17,6 +17,14 @@ should probably proofread and complete it, then remove this comment. -->
 # llama3_false_positives_1101_KTO_optimised_model
 
 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.5288
+- Rewards/chosen: 0.5981
+- Logps/chosen: -45.9608
+- Rewards/rejected: -0.3258
+- Logps/rejected: -56.8765
+- Rewards/margins: 0.9238
+- Kl: 0.0439
 
 ## Model description
 
@@ -48,6 +56,14 @@ The following hyperparameters were used during training:
 
 ### Training results
 
+| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Logps/chosen | Rewards/rejected | Logps/rejected | Rewards/margins | Kl     |
+|:-------------:|:-----:|:----:|:---------------:|:--------------:|:------------:|:----------------:|:--------------:|:---------------:|:------:|
+| 0.4886        | 0.96  | 12   | 0.6709          | 0.1420         | -50.5211     | 0.0978           | -52.6405       | 0.0442          | 0.1381 |
+| 0.5124        | 2.0   | 25   | 0.5966          | 0.3060         | -48.8816     | -0.1760          | -55.3786       | 0.4819          | 0.0375 |
+| 0.3647        | 2.96  | 37   | 0.5552          | 0.4908         | -47.0331     | -0.2555          | -56.1745       | 0.7464          | 0.0447 |
+| 0.3725        | 4.0   | 50   | 0.5239          | 0.5506         | -46.4352     | -0.3964          | -57.5829       | 0.9470          | 0.0317 |
+| 0.3249        | 4.96  | 62   | 0.5300          | 0.5839         | -46.1019     | -0.3309          | -56.9280       | 0.9148          | 0.0409 |
+| 0.3262        | 5.76  | 72   | 0.5288          | 0.5981         | -45.9608     | -0.3258          | -56.8765       | 0.9238          | 0.0439 |
 
 
 ### Framework versions
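In KTO/DPO-style training logs, the rewards margin is typically the chosen reward minus the rejected reward. A quick arithmetic check of the final table row above (small rounding drift in the logged values is expected):

```python
# Final-epoch values from the training results table above.
rewards_chosen = 0.5981
rewards_rejected = -0.3258

# In KTO/DPO-style logs, Rewards/margins is typically chosen minus rejected.
margin = rewards_chosen - rewards_rejected

# Reported Rewards/margins is 0.9238; allow for rounding in the logged values.
assert abs(margin - 0.9238) < 1e-3
```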
adapter_config.json
CHANGED
@@ -20,8 +20,8 @@
     "rank_pattern": {},
     "revision": null,
     "target_modules": [
-        "k_proj",
         "q_proj",
+        "k_proj",
         "v_proj",
         "o_proj"
     ],
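The `target_modules` reorder in this diff is cosmetic: PEFT treats the list as a set of module-name suffixes, so both orderings attach LoRA adapters to the same layers. A minimal sketch of that matching (a simplified assumption, not PEFT's exact implementation):

```python
# Suffix-based matching of target_modules against model module names
# (simplified sketch of how PEFT selects layers; names are illustrative).
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj"]

module_names = [
    "model.layers.0.self_attn.q_proj",
    "model.layers.0.self_attn.k_proj",
    "model.layers.0.mlp.gate_proj",  # not targeted
]

matched = [n for n in module_names if n.rsplit(".", 1)[-1] in target_modules]

# The pre-diff ordering selects exactly the same modules.
reordered = ["k_proj", "q_proj", "v_proj", "o_proj"]
assert [n for n in module_names if n.rsplit(".", 1)[-1] in reordered] == matched
```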
adapter_model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:8e4738ee6e1f67ef060f8c6f99326f9e014ad020aa305af94c963aa81ade0029
 size 27297544
metrics.jsonl
CHANGED
@@ -1,6 +1,6 @@
-{"epoch": 0.96, "precision": 0.7142857040816327, "recall": 0.
+{"epoch": 0.96, "precision": 0.7142857040816327, "recall": 0.8333333194444447, "fold": 0}
 {"epoch": 2.0, "precision": 0.9999999666666678, "recall": 0.5999999880000002, "fold": 0}
-{"epoch": 2.96, "precision": 0.9999999666666678, "recall": 0.
-{"epoch": 4.0, "precision": 0.
-{"epoch": 4.96, "precision": 0.
-{"epoch": 5.76, "precision": 0.
+{"epoch": 2.96, "precision": 0.9999999666666678, "recall": 0.7499999812500004, "fold": 0}
+{"epoch": 4.0, "precision": 0.9999999500000026, "recall": 0.4999999875000003, "fold": 0}
+{"epoch": 4.96, "precision": 0.9999999000000099, "recall": 0.24999999375000015, "fold": 0}
+{"epoch": 5.76, "precision": 0.9999999500000026, "recall": 0.39999999200000014, "fold": 0}
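The metrics.jsonl file reports only precision and recall per epoch; F1 can be derived from those two. A small sketch that parses a few of the JSONL rows above and picks the best epoch by F1:

```python
import json

# A few rows from metrics.jsonl above, as raw JSONL lines.
rows = [
    '{"epoch": 0.96, "precision": 0.7142857040816327, "recall": 0.8333333194444447, "fold": 0}',
    '{"epoch": 2.0, "precision": 0.9999999666666678, "recall": 0.5999999880000002, "fold": 0}',
    '{"epoch": 2.96, "precision": 0.9999999666666678, "recall": 0.7499999812500004, "fold": 0}',
]
records = [json.loads(r) for r in rows]

# F1 is the harmonic mean of precision and recall.
for rec in records:
    p, r = rec["precision"], rec["recall"]
    rec["f1"] = 2 * p * r / (p + r) if (p + r) > 0 else 0.0

best = max(records, key=lambda rec: rec["f1"])
```

On these three rows, epoch 2.96 (precision ~1.0, recall ~0.75) comes out best by F1 (~0.857).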
metrics_epoch_0.96_fold_0_lr_1e-05_seed_42_weight_2.0.json
CHANGED
@@ -1 +1 @@
-{"epoch": 0.96, "precision": 0.7142857040816327, "recall": 0.
+{"epoch": 0.96, "precision": 0.7142857040816327, "recall": 0.8333333194444447, "fold": 0}
metrics_epoch_2.96_fold_0_lr_1e-05_seed_42_weight_2.0.json
CHANGED
@@ -1 +1 @@
-{"epoch": 2.96, "precision": 0.9999999666666678, "recall": 0.
+{"epoch": 2.96, "precision": 0.9999999666666678, "recall": 0.7499999812500004, "fold": 0}
metrics_epoch_4.0_fold_0_lr_1e-05_seed_42_weight_2.0.json
CHANGED
@@ -1 +1 @@
-{"epoch": 4.0, "precision": 0.
+{"epoch": 4.0, "precision": 0.9999999500000026, "recall": 0.4999999875000003, "fold": 0}
metrics_epoch_4.96_fold_0_lr_1e-05_seed_42_weight_2.0.json
CHANGED
@@ -1 +1 @@
-{"epoch": 4.96, "precision": 0.
+{"epoch": 4.96, "precision": 0.9999999000000099, "recall": 0.24999999375000015, "fold": 0}
metrics_epoch_5.76_fold_0_lr_1e-05_seed_42_weight_2.0.json
CHANGED
@@ -1 +1 @@
-{"epoch": 5.76, "precision": 0.
+{"epoch": 5.76, "precision": 0.9999999500000026, "recall": 0.39999999200000014, "fold": 0}
results_epoch_0.96_fold_0_lr_1e-05_seed_42_weight_2.0.json
CHANGED
The diff for this file is too large to render.

results_epoch_2.0_fold_0_lr_1e-05_seed_42_weight_2.0.json
CHANGED
The diff for this file is too large to render.

results_epoch_2.96_fold_0_lr_1e-05_seed_42_weight_2.0.json
CHANGED
The diff for this file is too large to render.

results_epoch_4.0_fold_0_lr_1e-05_seed_42_weight_2.0.json
CHANGED
The diff for this file is too large to render.

results_epoch_4.96_fold_0_lr_1e-05_seed_42_weight_2.0.json
CHANGED
The diff for this file is too large to render.

results_epoch_5.76_fold_0_lr_1e-05_seed_42_weight_2.0.json
CHANGED
The diff for this file is too large to render.
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:4141f8c2c200a30b85d1dce107a6769a026fa17850ece625ace19de9dfa46f0e
+size 5304
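The adapter_model.safetensors and training_args.bin entries are git-lfs pointer files, not the binaries themselves: each pointer records a spec version, a `sha256` object id, and the blob size in bytes. A small sketch that parses a pointer in the format shown above:

```python
# A git-lfs pointer file, as in the training_args.bin diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:4141f8c2c200a30b85d1dce107a6769a026fa17850ece625ace19de9dfa46f0e
size 5304
"""

# Each line is "key value"; split on the first space only.
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())

# The oid field is "<algorithm>:<hex digest>".
algo, digest = fields["oid"].split(":", 1)
```

The `size` field is the byte count of the real object, so it can be checked against the downloaded file before trusting the digest.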