---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_multirc_1752870507
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# train_multirc_1752870507

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the multirc dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2630
- Num Input Tokens Seen: 132272272
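
Because this is a PEFT adapter rather than a full checkpoint, it is loaded on top of the frozen base model. A minimal usage sketch; the adapter repo id below is a hypothetical placeholder, substitute the actual path:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "your-username/train_multirc_1752870507"  # hypothetical; use the real repo id or local path

# Load the frozen base model, then attach the prompt-tuning adapter on top.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("Is the answer supported by the passage?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```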

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
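
The card records the dataset only as `multirc`. Assuming this refers to the MultiRC task from SuperGLUE (the LLaMA-Factory data config for this run may differ), a hedged sketch for inspecting it:

```python
from datasets import load_dataset

# Assumption: "multirc" is the SuperGLUE MultiRC reading-comprehension task.
multirc = load_dataset("super_glue", "multirc")
print(multirc["train"][0])  # paragraph, question, answer, label
```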

## Training procedure
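
The `prompt-tuning` tag indicates the adapter consists of learned virtual-token embeddings rather than updated model weights. A minimal PEFT-only sketch of an equivalent setup (the actual run went through LLaMA-Factory); `num_virtual_tokens=20` is an assumption, as the card does not record it:

```python
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Prompt tuning trains only a small set of virtual token embeddings;
# num_virtual_tokens=20 is an assumption, not recorded on this card.
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the prompt embeddings are trainable
```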

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
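
For reference, a hedged sketch of how these values map onto `transformers.TrainingArguments` (the run itself was driven by LLaMA-Factory, whose config keys differ slightly); `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="train_multirc_1752870507",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```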

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.419         | 0.5   | 3065  | 0.4716          | 6639424           |
| 0.33          | 1.0   | 6130  | 0.3514          | 13255424          |
| 0.3554        | 1.5   | 9195  | 0.3328          | 19871232          |
| 0.3902        | 2.0   | 12260 | 0.3273          | 26471216          |
| 0.284         | 2.5   | 15325 | 0.3223          | 33075856          |
| 0.3489        | 3.0   | 18390 | 0.3199          | 39694112          |
| 0.3676        | 3.5   | 21455 | 0.3093          | 46313216          |
| 0.2973        | 4.0   | 24520 | 0.3019          | 52929744          |
| 0.3558        | 4.5   | 27585 | 0.2950          | 59549072          |
| 0.3241        | 5.0   | 30650 | 0.2869          | 66152480          |
| 0.2652        | 5.5   | 33715 | 0.2818          | 72765696          |
| 0.2135        | 6.0   | 36780 | 0.2760          | 79389648          |
| 0.2174        | 6.5   | 39845 | 0.2734          | 86008784          |
| 0.2564        | 7.0   | 42910 | 0.2696          | 92621824          |
| 0.3691        | 7.5   | 45975 | 0.2669          | 99237152          |
| 0.2795        | 8.0   | 49040 | 0.2646          | 105830544         |
| 0.2912        | 8.5   | 52105 | 0.2639          | 112458064         |
| 0.2035        | 9.0   | 55170 | 0.2632          | 119047920         |
| 0.2487        | 9.5   | 58235 | 0.2633          | 125686064         |
| 0.2366        | 10.0  | 61300 | 0.2630          | 132272272         |
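
Validation loss falls steadily from 0.4716 to 0.2630 and has nearly plateaued by epoch 10. A small sketch to visualize the curve, using the values copied from the table above:

```python
import matplotlib.pyplot as plt

# Epoch / validation-loss pairs from the training results table.
epochs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0,
          5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0]
val_loss = [0.4716, 0.3514, 0.3328, 0.3273, 0.3223, 0.3199, 0.3093,
            0.3019, 0.2950, 0.2869, 0.2818, 0.2760, 0.2734, 0.2696,
            0.2669, 0.2646, 0.2639, 0.2632, 0.2633, 0.2630]

plt.plot(epochs, val_loss, marker="o")
plt.xlabel("Epoch")
plt.ylabel("Validation loss")
plt.title("train_multirc_1752870507 validation loss")
plt.show()
```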

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
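
To reproduce the training environment, pin the versions above. A quick sanity check, assuming the packages are already installed:

```python
import datasets, peft, tokenizers, torch, transformers

# Versions this adapter was trained with (from the list above).
expected = {
    "peft": "0.15.2",
    "transformers": "4.51.3",
    "torch": "2.7.1+cu126",
    "datasets": "3.6.0",
    "tokenizers": "0.21.1",
}
for name, module in [("peft", peft), ("transformers", transformers),
                     ("torch", torch), ("datasets", datasets),
                     ("tokenizers", tokenizers)]:
    print(name, module.__version__, "(expected", expected[name] + ")")
```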