rbelanec committed
Commit e4760c0 · verified · 1 Parent(s): a222b9d

Model save

Files changed (2)
  1. README.md +81 -0
  2. adapter_model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,81 @@
+ ---
+ library_name: peft
+ license: llama3
+ base_model: meta-llama/Meta-Llama-3-8B-Instruct
+ tags:
+ - llama-factory
+ - generated_from_trainer
+ model-index:
+ - name: train_multirc_1753094164
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # train_multirc_1753094164
+
+ This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1924
+ - Num Input Tokens Seen: 132272272
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 123
+ - optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 10.0
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
+ |:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
+ | 0.2959 | 0.5 | 3065 | 0.1770 | 6639424 |
+ | 0.1984 | 1.0 | 6130 | 0.1572 | 13255424 |
+ | 0.0889 | 1.5 | 9195 | 0.1520 | 19871232 |
+ | 0.1146 | 2.0 | 12260 | 0.1492 | 26471216 |
+ | 0.1268 | 2.5 | 15325 | 0.1545 | 33075856 |
+ | 0.1007 | 3.0 | 18390 | 0.1629 | 39694112 |
+ | 0.1521 | 3.5 | 21455 | 0.1603 | 46313216 |
+ | 0.006 | 4.0 | 24520 | 0.1501 | 52929744 |
+ | 0.3003 | 4.5 | 27585 | 0.1589 | 59549072 |
+ | 0.1177 | 5.0 | 30650 | 0.1592 | 66152480 |
+ | 0.0486 | 5.5 | 33715 | 0.1672 | 72765696 |
+ | 0.0755 | 6.0 | 36780 | 0.1772 | 79389648 |
+ | 0.0772 | 6.5 | 39845 | 0.1912 | 86008784 |
+ | 0.0286 | 7.0 | 42910 | 0.1884 | 92621824 |
+ | 0.1522 | 7.5 | 45975 | 0.1887 | 99237152 |
+ | 0.0034 | 8.0 | 49040 | 0.1856 | 105830544 |
+ | 0.0042 | 8.5 | 52105 | 0.1977 | 112458064 |
+ | 0.0036 | 9.0 | 55170 | 0.1930 | 119047920 |
+ | 0.1249 | 9.5 | 58235 | 0.1925 | 125686064 |
+ | 0.0941 | 10.0 | 61300 | 0.1924 | 132272272 |
+
+
+ ### Framework versions
+
+ - PEFT 0.15.2
+ - Transformers 4.51.3
+ - Pytorch 2.7.1+cu126
+ - Datasets 3.6.0
+ - Tokenizers 0.21.1
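
The README above documents a PEFT adapter trained on top of meta-llama/Meta-Llama-3-8B-Instruct. A minimal inference sketch follows; the hub repo id `rbelanec/train_multirc_1753094164` is inferred from the committer and run name and is an assumption, as are the dtype, device placement, and prompt format:

```python
# Minimal sketch: load the frozen base model, attach the PEFT adapter, generate.
# The adapter repo id below is assumed from the committer and run name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_multirc_1753094164"  # assumed Hub path

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # base weights stay frozen

prompt = "Passage: ...\nQuestion: ...\nAnswer:"  # MultiRC-style prompt (assumed format)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

One note on the hyperparameters: with lr_scheduler_warmup_ratio 0.1 over the 61300 total steps shown in the results table, the cosine schedule warms up for roughly the first 6130 steps, i.e. about one epoch.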
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e511afeb590263c7fb1d1b97ccc7239259e7e9cbdee35e7855dcdd0a3b88eb92
+ oid sha256:e6bbb2f1d40abde92a12c17ff0959f9c06760d051b524c86796d4f50af850d9a
  size 1074144
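
The pointer update swaps only the sha256 oid; the file size (1074144 bytes) is unchanged. A minimal sketch, assuming a local download of `adapter_model.safetensors`, for checking the file against the new oid:

```python
# Verify a downloaded adapter_model.safetensors against the sha256 oid in the
# LFS pointer from this commit. The local filename is an assumption.
import hashlib

EXPECTED_OID = "e6bbb2f1d40abde92a12c17ff0959f9c06760d051b524c86796d4f50af850d9a"

h = hashlib.sha256()
with open("adapter_model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
        h.update(chunk)

assert h.hexdigest() == EXPECTED_OID, "file does not match the committed LFS oid"
print("sha256 OK")
```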