Heejindo committed · Commit 6ff1a5b · verified · 1 Parent(s): 37da318

Files changed (3):
  1. README.md +90 -0
  2. generation_config.json +9 -0
  3. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,90 @@
+ ---
+ library_name: transformers
+ license: llama3.2
+ base_model: meta-llama/Llama-3.2-1B
+ tags:
+ - trl
+ - sft
+ - generated_from_trainer
+ model-index:
+ - name: rationale_model_e15
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # rationale_model_e15
+
+ This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 2.1070
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: adamw_torch (AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - num_epochs: 3.0
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:-----:|:---------------:|
+ | 2.1363 | 0.0954 | 500 | 2.1185 |
+ | 1.7868 | 0.1908 | 1000 | 2.1070 |
+ | 1.5132 | 0.2862 | 1500 | 2.1743 |
+ | 1.238 | 0.3815 | 2000 | 2.2694 |
+ | 0.9723 | 0.4769 | 2500 | 2.3214 |
+ | 0.7249 | 0.5723 | 3000 | 2.4423 |
+ | 0.5657 | 0.6677 | 3500 | 2.5636 |
+ | 0.4404 | 0.7631 | 4000 | 2.6851 |
+ | 0.3192 | 0.8585 | 4500 | 2.8630 |
+ | 0.2676 | 0.9538 | 5000 | 2.9741 |
+ | 0.2057 | 1.0492 | 5500 | 3.0958 |
+ | 0.1792 | 1.1446 | 6000 | 3.1219 |
+ | 0.1691 | 1.2400 | 6500 | 3.1735 |
+ | 0.1597 | 1.3354 | 7000 | 3.2299 |
+ | 0.1516 | 1.4308 | 7500 | 3.2997 |
+ | 0.1422 | 1.5261 | 8000 | 3.2759 |
+ | 0.1372 | 1.6215 | 8500 | 3.3557 |
+ | 0.1301 | 1.7169 | 9000 | 3.4023 |
+ | 0.1229 | 1.8123 | 9500 | 3.4617 |
+ | 0.1183 | 1.9077 | 10000 | 3.4668 |
+ | 0.1119 | 2.0031 | 10500 | 3.5609 |
+ | 0.0924 | 2.0984 | 11000 | 3.5975 |
+ | 0.0926 | 2.1938 | 11500 | 3.6429 |
+ | 0.089 | 2.2892 | 12000 | 3.6586 |
+ | 0.0881 | 2.3846 | 12500 | 3.6920 |
+ | 0.0861 | 2.4800 | 13000 | 3.7656 |
+ | 0.0835 | 2.5754 | 13500 | 3.7939 |
+ | 0.0803 | 2.6707 | 14000 | 3.8398 |
+ | 0.0797 | 2.7661 | 14500 | 3.8909 |
+ | 0.0774 | 2.8615 | 15000 | 3.9238 |
+ | 0.0759 | 2.9569 | 15500 | 3.9394 |
+
+
+ ### Framework versions
+
+ - Transformers 4.46.3
+ - Pytorch 2.3.0
+ - Datasets 2.14.4
+ - Tokenizers 0.20.3
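
Note that the reported evaluation loss of 2.1070 corresponds to the step-1000 checkpoint; validation loss climbs steadily afterwards (to 3.94 by step 15500) while training loss keeps falling, so the final weights are heavily overfit to the training data.

For reference, a minimal sketch of how the hyperparameters above might map onto a TRL `SFTTrainer` run. The dataset id and its text column are placeholders (assumptions): the card does not say what data rationale_model_e15 was trained on, and the exact TRL version is not listed.

```python
# Sketch only, assuming a TRL release contemporary with Transformers 4.46.3.
# "your/dataset" is hypothetical; SFTTrainer expects a "text" column by default.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_ds = load_dataset("your/dataset", split="train")  # placeholder dataset

args = SFTConfig(
    output_dir="rationale_model_e15",
    learning_rate=5e-5,               # learning_rate: 5e-05
    per_device_train_batch_size=8,    # train_batch_size: 8
    per_device_eval_batch_size=8,     # eval_batch_size: 8
    seed=42,                          # seed: 42
    optim="adamw_torch",              # AdamW; betas=(0.9, 0.999), eps=1e-08 are defaults
    lr_scheduler_type="linear",       # lr_scheduler_type: linear
    num_train_epochs=3.0,             # num_epochs: 3.0
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-1B",  # base model from the card
    args=args,
    train_dataset=train_ds,
)
trainer.train()
```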
generation_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 128000,
+   "do_sample": true,
+   "eos_token_id": 128001,
+   "temperature": 0.6,
+   "top_p": 0.9,
+   "transformers_version": "4.46.3"
+ }
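
These defaults (sampling on, temperature 0.6, top_p 0.9, Llama 3 token ids) are picked up automatically by `model.generate()` when the checkpoint is loaded. A minimal inference sketch; the repo id below is an assumption inferred from the committer and model name:

```python
# Sketch only. "Heejindo/rationale_model_e15" is a hypothetical repo id;
# adjust to the actual Hub path or a local checkpoint directory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Heejindo/rationale_model_e15"  # assumption
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

inputs = tokenizer("Explain why the sky is blue.", return_tensors="pt")

# generation_config.json is applied automatically:
# do_sample=True, temperature=0.6, top_p=0.9, eos_token_id=128001.
sampled = model.generate(**inputs, max_new_tokens=128)

# Any default can be overridden per call, e.g. greedy decoding:
greedy = model.generate(**inputs, max_new_tokens=128, do_sample=False)

print(tokenizer.decode(sampled[0], skip_special_tokens=True))
```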
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9b94406e18315f25215e10b633a7027c435a8fb67218a950e70c7aaa0eb0f543
+ oid sha256:364b6daf953c4e2d4b47ff91c1e6b06855bd5e212ca9fde3395e4355c5775740
  size 4943274328