---
license: other
base_model: yahma/llama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: V0305P7
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# V0305P7

This model is a fine-tuned version of [yahma/llama-7b-hf](https://huggingface.co/yahma/llama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0720
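
As a quick smoke test, the checkpoint can be loaded with the standard `transformers` API. This is a minimal sketch, assuming the repository ships full model weights rather than a PEFT/LoRA adapter, and using `V0305P7` as a placeholder for the actual local path or Hub id:

```python
# Minimal inference sketch. If the repo only contains adapter weights,
# load with peft's AutoPeftModelForCausalLM instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "V0305P7"  # placeholder: actual path or Hub id

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.float16,  # LLaMA-7B fits on one 24 GB GPU in fp16
    device_map="auto",          # requires the accelerate package
)

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```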

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
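
For reference, a hedged reconstruction of this configuration as `transformers.TrainingArguments`. Only the values listed above come from the card; the output directory and precision flag are assumptions, and the effective batch size of 128 follows from 4 × 32 on a single device:

```python
# Sketch only: maps the hyperparameters above onto TrainingArguments.
# output_dir is a placeholder; fp16=True is an assumed reading of "Native AMP".
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="V0305P7",            # placeholder output directory
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=32,  # 4 * 32 = 128 effective batch size
    adam_beta1=0.9,                  # Adam settings as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=20,
    num_train_epochs=3,
    fp16=True,                       # "Native AMP" mixed precision
)
```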

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6031 | 0.09 | 10 | 0.2267 |
| 0.172 | 0.17 | 20 | 0.1521 |
| 0.1569 | 0.26 | 30 | 0.1541 |
| 0.1515 | 0.34 | 40 | 0.1571 |
| 0.1519 | 0.43 | 50 | 0.1508 |
| 0.1565 | 0.51 | 60 | 0.1499 |
| 0.1515 | 0.6 | 70 | 0.1501 |
| 0.152 | 0.68 | 80 | 0.1409 |
| 0.1413 | 0.77 | 90 | 0.1359 |
| 0.1303 | 0.85 | 100 | 0.1025 |
| 0.1181 | 0.94 | 110 | 0.0935 |
| 0.1178 | 1.02 | 120 | 0.0946 |
| 0.1043 | 1.11 | 130 | 0.0963 |
| 0.0955 | 1.19 | 140 | 0.0936 |
| 0.0948 | 1.28 | 150 | 0.0856 |
| 0.095 | 1.37 | 160 | 0.0802 |
| 0.0941 | 1.45 | 170 | 0.0779 |
| 0.0863 | 1.54 | 180 | 0.0821 |
| 0.0868 | 1.62 | 190 | 0.0802 |
| 0.0904 | 1.71 | 200 | 0.0768 |
| 0.0893 | 1.79 | 210 | 0.0783 |
| 0.0851 | 1.88 | 220 | 0.0732 |
| 0.079 | 1.96 | 230 | 0.0763 |
| 0.0666 | 2.05 | 240 | 0.0793 |
| 0.0498 | 2.13 | 250 | 0.0794 |
| 0.05 | 2.22 | 260 | 0.0777 |
| 0.0479 | 2.3 | 270 | 0.0795 |
| 0.0543 | 2.39 | 280 | 0.0723 |
| 0.0544 | 2.47 | 290 | 0.0702 |
| 0.0534 | 2.56 | 300 | 0.0703 |
| 0.0544 | 2.65 | 310 | 0.0704 |
| 0.048 | 2.73 | 320 | 0.0737 |
| 0.0499 | 2.82 | 330 | 0.0736 |
| 0.0525 | 2.9 | 340 | 0.0718 |
| 0.0502 | 2.99 | 350 | 0.0720 |

### Framework versions

- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1