nerottt committed · verified
Commit 36a9401 · 1 Parent(s): 2e54700

End of training

README.md CHANGED
@@ -18,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.3757
+- Loss: 1.3627
 
 ## Model description
 
@@ -43,15 +43,18 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
-- num_epochs: 3
+- num_epochs: 6
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 1.3784        | 1.0   | 179  | 1.3965          |
-| 1.435         | 2.0   | 358  | 1.3774          |
-| 1.2838        | 3.0   | 537  | 1.3757          |
+| 1.3743        | 1.0   | 179  | 1.3920          |
+| 1.424         | 2.0   | 358  | 1.3706          |
+| 1.2688        | 3.0   | 537  | 1.3631          |
+| 1.4132        | 4.0   | 716  | 1.3620          |
+| 1.3061        | 5.0   | 895  | 1.3625          |
+| 1.2414        | 6.0   | 1074 | 1.3627          |
 
 
 ### Framework versions
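The updated table shows the validation loss bottoming out at epoch 4 (1.3620) and drifting slightly upward through epoch 6, while the card's headline Loss (1.3627) is the final-epoch value. A minimal sketch of selecting the best checkpoint by validation loss (the epoch/loss pairs are copied from the table above; the selection logic is illustrative, not part of the actual training script):

```python
# Validation losses per epoch, taken from the updated training-results table.
val_loss = {1: 1.3920, 2: 1.3706, 3: 1.3631, 4: 1.3620, 5: 1.3625, 6: 1.3627}

# Pick the epoch with the lowest validation loss rather than the last one.
best_epoch = min(val_loss, key=val_loss.get)
best_loss = val_loss[best_epoch]
```

With these numbers the minimum falls at epoch 4, suggesting the extra epochs added by this commit did not improve generalization further.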
adapter_config.json CHANGED
@@ -20,10 +20,10 @@
     "rank_pattern": {},
     "revision": null,
     "target_modules": [
-        "q_proj",
         "k_proj",
-        "o_proj",
-        "v_proj"
+        "q_proj",
+        "v_proj",
+        "o_proj"
     ],
     "task_type": "CAUSAL_LM",
     "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f5cf9245942ecd7eb126eb65f418e4a3ddcb8c4d3bd12207911a7ec4da1ed8f1
+oid sha256:fc55edce4b0a74aa81cd5dc036d25ce7d18b1567a2bac25a6792224568f76a89
 size 109086672
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e4ef494958d1f6eb9cb07c6235c35e33988d09c7a08a088ba1ce7412876e0202
+oid sha256:4f7edd70b81ea4cf67a7c18ebdfcf815bdad0f5d346da96537a7d45865c828f5
 size 5496
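The adapter_model.safetensors and training_args.bin entries above are git-lfs pointer files, not the binaries themselves: each records a spec version, the sha256 of the real object, and its size in bytes, and the diff swaps only the hash because the retrained files replace the old ones at the same size. A minimal sketch of parsing that pointer format (parse_lfs_pointer is an illustrative helper, not part of git-lfs):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file (spec v1) into a key/value dict."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", split on the first space.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# The updated training_args.bin pointer from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:4f7edd70b81ea4cf67a7c18ebdfcf815bdad0f5d346da96537a7d45865c828f5
size 5496"""

info = parse_lfs_pointer(pointer)
```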