# Llama-Instruct-8B
This model is a fine-tuned version of meta-llama/Meta-Llama-3.1-8B-Instruct on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.2965
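
The Framework versions below list PEFT, so this checkpoint is presumably a LoRA-style adapter rather than full model weights. Here is a minimal inference sketch, assuming the adapter is published under the repo id towhid2000bd/Llama-Instruct-8B and that you have access to the gated base model:

```python
# Minimal inference sketch. Assumptions: the adapter lives at
# towhid2000bd/Llama-Instruct-8B and the base model is accessible.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
adapter_id = "towhid2000bd/Llama-Instruct-8B"  # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the fine-tuned adapter on top of the base model.
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "Say hello in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```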
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: paged_adamw_8bit (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
- mixed_precision_training: Native AMP
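
For reference, here is a minimal sketch of how the settings above map onto `transformers.TrainingArguments`. Only the listed hyperparameters come from this card; the output path, dataset, LoRA config, and Trainer wiring are not recorded here and are left out or marked as assumptions:

```python
# Hypothetical reconstruction of the training configuration above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama-instruct-8b-ft",  # assumed output path
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=4,      # effective train batch size: 4 x 4 = 16
    optim="paged_adamw_8bit",           # betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=4,
    fp16=True,                          # "Native AMP" mixed precision
)
```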
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 2.1139 | 0.1144 | 50 | 1.8861 |
| 1.3487 | 0.2288 | 100 | 0.6872 |
| 0.4797 | 0.3432 | 150 | 0.4065 |
| 0.3914 | 0.4577 | 200 | 0.3877 |
| 0.3808 | 0.5721 | 250 | 0.3773 |
| 0.3682 | 0.6865 | 300 | 0.3622 |
| 0.3539 | 0.8009 | 350 | 0.3459 |
| 0.3333 | 0.9153 | 400 | 0.3344 |
| 0.3278 | 1.0297 | 450 | 0.3261 |
| 0.3227 | 1.1442 | 500 | 0.3215 |
| 0.3182 | 1.2586 | 550 | 0.3185 |
| 0.315 | 1.3730 | 600 | 0.3156 |
| 0.3117 | 1.4874 | 650 | 0.3142 |
| 0.3108 | 1.6018 | 700 | 0.3122 |
| 0.3083 | 1.7162 | 750 | 0.3113 |
| 0.3086 | 1.8307 | 800 | 0.3089 |
| 0.3083 | 1.9451 | 850 | 0.3075 |
| 0.3054 | 2.0595 | 900 | 0.3070 |
| 0.3043 | 2.1739 | 950 | 0.3054 |
| 0.301 | 2.2883 | 1000 | 0.3040 |
| 0.3023 | 2.4027 | 1050 | 0.3034 |
| 0.2988 | 2.5172 | 1100 | 0.3025 |
| 0.2988 | 2.6316 | 1150 | 0.3023 |
| 0.2988 | 2.7460 | 1200 | 0.3007 |
| 0.2987 | 2.8604 | 1250 | 0.3002 |
| 0.2974 | 2.9748 | 1300 | 0.2999 |
| 0.2966 | 3.0892 | 1350 | 0.2991 |
| 0.2966 | 3.2037 | 1400 | 0.2988 |
| 0.2963 | 3.3181 | 1450 | 0.2981 |
| 0.295 | 3.4325 | 1500 | 0.2979 |
| 0.2931 | 3.5469 | 1550 | 0.2974 |
| 0.2944 | 3.6613 | 1600 | 0.2972 |
| 0.2937 | 3.7757 | 1650 | 0.2967 |
| 0.2904 | 3.8902 | 1700 | 0.2965 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.50.3
- PyTorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1