voidful committed · verified
Commit 31c7bb9 · 1 Parent(s): edc8cb2

Model save

README.md CHANGED
@@ -15,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [voidful/llm-codec-abl-ftp](https://huggingface.co/voidful/llm-codec-abl-ftp) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 12.0997
+- Loss: 8.5615
 
 ## Model description
 
@@ -43,13 +43,15 @@ The following hyperparameters were used during training:
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 1
+- num_epochs: 3
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:-----:|:---------------:|
-| 12.1107 | 1.0 | 35156 | 12.0997 |
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:------:|:---------------:|
+| 8.7126 | 1.0 | 35156 | 8.8477 |
+| 8.3702 | 2.0 | 70312 | 8.6207 |
+| 8.4569 | 3.0 | 105468 | 8.5615 |
 
 
 ### Framework versions
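The step counts in the training-results table above are consistent with one epoch spanning 35156 optimizer steps; a quick arithmetic check (a standalone sketch, not part of this repository):

```python
# Each epoch in the new training run covers 35156 optimizer steps,
# so epochs 1-3 should end at the step numbers reported in the table.
steps_per_epoch = 35156
epoch_end_steps = [steps_per_epoch * epoch for epoch in (1, 2, 3)]
print(epoch_end_steps)  # [35156, 70312, 105468]
```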
final/adapter_config.json CHANGED
@@ -24,13 +24,13 @@
     "rank_pattern": {},
     "revision": null,
     "target_modules": [
-        "gate_proj",
-        "o_proj",
-        "v_proj",
+        "down_proj",
         "k_proj",
+        "gate_proj",
+        "up_proj",
         "q_proj",
-        "down_proj",
-        "up_proj"
+        "o_proj",
+        "v_proj"
     ],
     "task_type": "CAUSAL_LM",
     "trainable_token_indices": null,
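Note that this adapter_config.json change only reorders `target_modules`; taken as a set, the LoRA target modules are identical in both revisions. A small check (a sketch using only the values from the diff):

```python
# target_modules before and after the commit, as listed in the diff above.
old = ["gate_proj", "o_proj", "v_proj", "k_proj", "q_proj", "down_proj", "up_proj"]
new = ["down_proj", "k_proj", "gate_proj", "up_proj", "q_proj", "o_proj", "v_proj"]

# PEFT treats target_modules as a collection of module-name patterns,
# so ordering does not affect which layers receive LoRA adapters.
assert set(old) == set(new)
print(sorted(new))
```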
final/adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2d429532f59101c43e978d5922a0d92fdbc2569bdd8a2f5a5c30391076e186bf
-size 2501071448
+oid sha256:84905d7b9b77e33ca4fe0d31581a0f1b7e832e9b9f0ffdc1ea10ca0459f70755
+size 528550256
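What git tracks for this file is not the weights themselves but a small git-lfs pointer: a few `key value` lines giving the spec version, the content hash, and the byte size, which is why the diff above is only three lines. A minimal parser for that format (an illustrative sketch, not an official git-lfs tool):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file into {version, oid, size} fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])  # size is the blob's byte count
    return fields

# The new pointer for final/adapter_model.safetensors from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:84905d7b9b77e33ca4fe0d31581a0f1b7e832e9b9f0ffdc1ea10ca0459f70755
size 528550256
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 528550256
```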
final/added_tokens.json CHANGED
The diff for this file is too large to render. See raw diff
 
final/tokenizer.json CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:73e0dd860929d8b8842232d6b15674a5c94b615d457f717f7241735a26aaeffe
-size 19162453
+oid sha256:fdbbfab204782795ee3986e63016e44007fe18b73404bca7f3093ca7064c59c3
+size 15261883
final/tokenizer_config.json CHANGED
The diff for this file is too large to render. See raw diff
 
final/training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9559e073553ed46f87b592d739c58d139e44fc56a226eba21461d47bd539ea88
+oid sha256:0a3c1485c0affc800608c3cc0df38629b63faf2a204e9401e3510e3eadeb073c
 size 5841