Model save

Files changed:
- README.md (+24, -24)
- adapter_model.safetensors (+1, -1)
README.md CHANGED

@@ -1,9 +1,9 @@
 ---
 library_name: peft
-license: llama3
+license: llama3.2
-base_model: meta-llama/
+base_model: meta-llama/Llama-3.2-1B-Instruct
 tags:
-- base_model:adapter:meta-llama/
+- base_model:adapter:meta-llama/Llama-3.2-1B-Instruct
 - llama-factory
 - transformers
 pipeline_tag: text-generation
@@ -17,10 +17,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # test
 
-This model is a fine-tuned version of [meta-llama/
+This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
+- Loss: 0.4413
-- Num Input Tokens Seen:
+- Num Input Tokens Seen: 46944
 
 ## Model description
@@ -39,7 +39,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate:
+- learning_rate: 5e-05
 - train_batch_size: 4
 - eval_batch_size: 4
 - seed: 123
@@ -52,23 +52,23 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
 |:-------------:|:-----:|:----:|:---------------:|:-----------------:|
-
-
-
-
-| 0.
-| 0.
-| 0.
-
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
+| 1.0409 | 0.056 | 7 | 0.3513 | 2880 |
+| 0.4086 | 0.112 | 14 | 1.1121 | 5920 |
+| 0.9267 | 0.168 | 21 | 0.3511 | 8416 |
+| 0.7142 | 0.224 | 28 | 0.3859 | 11264 |
+| 0.5983 | 0.28 | 35 | 0.6585 | 13824 |
+| 0.394 | 0.336 | 42 | 0.4126 | 16672 |
+| 0.4533 | 0.392 | 49 | 1.1762 | 19296 |
+| 1.3512 | 0.448 | 56 | 0.8065 | 22432 |
+| 0.7948 | 0.504 | 63 | 1.0268 | 25504 |
+| 0.3463 | 0.56 | 70 | 0.3528 | 28064 |
+| 0.3652 | 0.616 | 77 | 0.3505 | 30720 |
+| 0.3476 | 0.672 | 84 | 0.3471 | 33504 |
+| 0.3395 | 0.728 | 91 | 0.3648 | 36128 |
+| 0.4569 | 0.784 | 98 | 0.3611 | 38592 |
+| 0.3191 | 0.84 | 105 | 0.4305 | 41280 |
+| 0.3951 | 0.896 | 112 | 0.4486 | 44160 |
+| 0.3107 | 0.952 | 119 | 0.4413 | 46944 |
 
 
 ### Framework versions
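The updated evaluation table can be sanity-checked against the summary figures in the card header (Loss: 0.4413, Num Input Tokens Seen: 46944). A minimal sketch — the rows are copied from the diff; the tuple layout and variable names are my own, not part of the model card:

```python
# Rows from the updated eval table in README.md:
# (training_loss, epoch, step, validation_loss, input_tokens_seen)
ROWS = [
    (1.0409, 0.056, 7, 0.3513, 2880),
    (0.4086, 0.112, 14, 1.1121, 5920),
    (0.9267, 0.168, 21, 0.3511, 8416),
    (0.7142, 0.224, 28, 0.3859, 11264),
    (0.5983, 0.28, 35, 0.6585, 13824),
    (0.394, 0.336, 42, 0.4126, 16672),
    (0.4533, 0.392, 49, 1.1762, 19296),
    (1.3512, 0.448, 56, 0.8065, 22432),
    (0.7948, 0.504, 63, 1.0268, 25504),
    (0.3463, 0.56, 70, 0.3528, 28064),
    (0.3652, 0.616, 77, 0.3505, 30720),
    (0.3476, 0.672, 84, 0.3471, 33504),
    (0.3395, 0.728, 91, 0.3648, 36128),
    (0.4569, 0.784, 98, 0.3611, 38592),
    (0.3191, 0.84, 105, 0.4305, 41280),
    (0.3951, 0.896, 112, 0.4486, 44160),
    (0.3107, 0.952, 119, 0.4413, 46944),
]

# The card header's "Loss" and "Num Input Tokens Seen" match the last logged
# evaluation step, not the best one.
final = ROWS[-1]
assert final[3] == 0.4413 and final[4] == 46944

# Lowest validation loss in the run:
best = min(ROWS, key=lambda r: r[3])
print(f"best eval loss {best[3]} at step {best[2]}")  # 0.3471 at step 84
```

Validation loss in this run is noisy (0.35 to 1.18 across checkpoints), and the lowest value occurs at step 84 rather than at the final saved step, which is worth noting when choosing a checkpoint.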
adapter_model.safetensors CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:434d92bdf22308ada82c55657344041a47d5ad9cd87553d78a484f8fe58ce4ff
 size 2818586248
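The adapter weights are stored as a Git LFS pointer file: the diff changes only the `oid` line, while `size` stays at 2818586248 bytes, meaning the payload was replaced by a new blob of identical size. A minimal sketch of how such a pointer parses — `parse_lfs_pointer` is a hypothetical helper for illustration, not a git-lfs API:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new pointer contents from the diff:
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:434d92bdf22308ada82c55657344041a47d5ad9cd87553d78a484f8fe58ce4ff
size 2818586248
"""

info = parse_lfs_pointer(pointer)
algo, digest = info["oid"].split(":")
assert algo == "sha256" and len(digest) == 64  # SHA-256 hex digest
assert int(info["size"]) == 2818586248         # ~2.8 GB payload
print(f"{info['size']} bytes, {algo}:{digest[:12]}")
```

Git stores only this small pointer in the repository; the actual safetensors blob lives in LFS storage, addressed by the SHA-256 digest.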