---
license: other
base_model: yahma/llama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: V0305P1
  results: []
---

# V0305P1

This model is a fine-tuned version of [yahma/llama-7b-hf](https://huggingface.co/yahma/llama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0697
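
A minimal usage sketch follows. It assumes this repository contains full fine-tuned weights rather than a PEFT/LoRA adapter, that `accelerate` is installed (required for `device_map="auto"`), and that `"V0305P1"` stands in for the actual local path or Hub id of this repo:

```python
# Hedged sketch: load the fine-tuned checkpoint and generate text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "V0305P1"  # hypothetical; replace with the actual local path or Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 keeps LLaMA-7B within ~14 GB of GPU memory
    device_map="auto",          # requires the accelerate package
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```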

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
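
Assuming training used the Hugging Face `Trainer`, the list above roughly corresponds to the following `TrainingArguments`. This is a sketch, not the recorded training script; the dataset, model loading, and any PEFT specifics are not captured by this card, and the eval/logging cadence is inferred from the results table below:

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
# Total train batch size of 128 = 4 (per device) x 32 (accumulation steps),
# assuming a single GPU; "Native AMP" maps to fp16=True.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="V0305P1",          # hypothetical output path
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=32,
    num_train_epochs=3,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=20,
    seed=42,
    fp16=True,                     # Native AMP mixed precision
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",   # assumption: eval every 10 steps,
    eval_steps=10,                 # matching the results table below
    logging_steps=10,
)
```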

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0736        | 0.09  | 10   | 0.1558          |
| 0.1607        | 0.17  | 20   | 0.1576          |
| 0.1576        | 0.26  | 30   | 0.1518          |
| 0.1522        | 0.34  | 40   | 0.1504          |
| 0.1506        | 0.43  | 50   | 0.1494          |
| 0.1561        | 0.51  | 60   | 0.1507          |
| 0.1516        | 0.6   | 70   | 0.1495          |
| 0.1528        | 0.68  | 80   | 0.1480          |
| 0.1481        | 0.77  | 90   | 0.1435          |
| 0.1513        | 0.85  | 100  | 0.1445          |
| 0.1463        | 0.94  | 110  | 0.1142          |
| 0.1277        | 1.02  | 120  | 0.1126          |
| 0.119         | 1.11  | 130  | 0.1112          |
| 0.1092        | 1.19  | 140  | 0.0969          |
| 0.1113        | 1.28  | 150  | 0.0965          |
| 0.1033        | 1.37  | 160  | 0.0991          |
| 0.1025        | 1.45  | 170  | 0.0881          |
| 0.0922        | 1.54  | 180  | 0.0878          |
| 0.0931        | 1.62  | 190  | 0.0811          |
| 0.0909        | 1.71  | 200  | 0.0786          |
| 0.087         | 1.79  | 210  | 0.0755          |
| 0.0868        | 1.88  | 220  | 0.0745          |
| 0.0825        | 1.96  | 230  | 0.0832          |
| 0.0636        | 2.05  | 240  | 0.0820          |
| 0.0504        | 2.13  | 250  | 0.0864          |
| 0.0463        | 2.22  | 260  | 0.0876          |
| 0.0449        | 2.3   | 270  | 0.0847          |
| 0.0529        | 2.39  | 280  | 0.0711          |
| 0.0489        | 2.47  | 290  | 0.0693          |
| 0.05          | 2.56  | 300  | 0.0699          |
| 0.0519        | 2.65  | 310  | 0.0686          |
| 0.0411        | 2.73  | 320  | 0.0688          |
| 0.0473        | 2.82  | 330  | 0.0695          |
| 0.0471        | 2.9   | 340  | 0.0697          |
| 0.0452        | 2.99  | 350  | 0.0697          |

### Framework versions

- Transformers 4.36.0.dev0
- PyTorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
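
When reproducing results, it may help to confirm the local environment matches these versions. A small check, assuming all four packages are installed:

```python
# Sanity-check installed package versions against those listed above.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "Transformers": (transformers.__version__, "4.36.0.dev0"),
    "PyTorch": (torch.__version__, "2.1.2+cu121"),
    "Datasets": (datasets.__version__, "2.14.6"),
    "Tokenizers": (tokenizers.__version__, "0.14.1"),
}
for name, (found, wanted) in expected.items():
    flag = "OK" if found == wanted else "differs"
    print(f"{name}: found {found}, card lists {wanted} ({flag})")
```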