# Llama-3.1-8B-Instruct-PsyCourse-fold1
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the course-train-fold1 dataset.
It achieves the following results on the evaluation set:

- Loss: 0.0338

This matches the lowest validation loss recorded during training (step 1350, epoch ≈ 2.08; see the results table below), which suggests the best checkpoint, rather than the final one, was kept.
## Model description
More information needed
## Intended uses & limitations
More information needed
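Since usage details are not documented, the sketch below shows one plausible way to load and query the model. It assumes this repository hosts a PEFT/LoRA adapter on top of the base model (consistent with the PEFT version listed under Framework versions); the dtype, device settings, and example prompt are illustrative choices, not part of the original card.

```python
# Minimal inference sketch (assumption: this repo is a PEFT/LoRA adapter
# for meta-llama/Llama-3.1-8B-Instruct, as the PEFT framework version suggests).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "meta-llama/Llama-3.1-8B-Instruct"
ADAPTER_ID = "chchen/Llama-3.1-8B-Instruct-PsyCourse-fold1"

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_ID,
    torch_dtype=torch.bfloat16,  # assumption: bf16-capable hardware
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)
model.eval()

# Hypothetical prompt for illustration only.
messages = [{"role": "user", "content": "What is cognitive dissonance?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```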
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
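For reference, a hedged reconstruction of how these settings map onto `transformers.TrainingArguments` is shown below. The actual training script, dataset preparation, and LoRA configuration are not documented in this card, so treat this as a sketch rather than the exact recipe; `output_dir` is a placeholder.

```python
# Reconstruction of the listed hyperparameters as TrainingArguments.
# Not the original training script; dataset and LoRA setup are undocumented.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Llama-3.1-8B-Instruct-PsyCourse-fold1",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,  # effective train batch size: 1 x 16 = 16
    num_train_epochs=5.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",  # betas=(0.9, 0.999), epsilon=1e-08 are the defaults
    seed=42,
)
```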
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 0.5562 | 0.0770 | 50 | 0.4070 |
| 0.1226 | 0.1539 | 100 | 0.0915 |
| 0.0859 | 0.2309 | 150 | 0.0644 |
| 0.0714 | 0.3078 | 200 | 0.0530 |
| 0.054 | 0.3848 | 250 | 0.0493 |
| 0.0553 | 0.4617 | 300 | 0.0478 |
| 0.053 | 0.5387 | 350 | 0.0479 |
| 0.0645 | 0.6156 | 400 | 0.0487 |
| 0.0391 | 0.6926 | 450 | 0.0451 |
| 0.0339 | 0.7695 | 500 | 0.0405 |
| 0.0513 | 0.8465 | 550 | 0.0405 |
| 0.0374 | 0.9234 | 600 | 0.0385 |
| 0.0334 | 1.0004 | 650 | 0.0381 |
| 0.0405 | 1.0773 | 700 | 0.0403 |
| 0.0337 | 1.1543 | 750 | 0.0361 |
| 0.0344 | 1.2312 | 800 | 0.0359 |
| 0.0344 | 1.3082 | 850 | 0.0373 |
| 0.0221 | 1.3851 | 900 | 0.0357 |
| 0.0457 | 1.4621 | 950 | 0.0375 |
| 0.0357 | 1.5391 | 1000 | 0.0365 |
| 0.034 | 1.6160 | 1050 | 0.0356 |
| 0.0394 | 1.6930 | 1100 | 0.0364 |
| 0.0244 | 1.7699 | 1150 | 0.0360 |
| 0.023 | 1.8469 | 1200 | 0.0371 |
| 0.0339 | 1.9238 | 1250 | 0.0358 |
| 0.0252 | 2.0008 | 1300 | 0.0344 |
| 0.019 | 2.0777 | 1350 | 0.0338 |
| 0.0297 | 2.1547 | 1400 | 0.0361 |
| 0.0142 | 2.2316 | 1450 | 0.0371 |
| 0.0227 | 2.3086 | 1500 | 0.0354 |
| 0.0179 | 2.3855 | 1550 | 0.0411 |
| 0.0189 | 2.4625 | 1600 | 0.0363 |
| 0.0246 | 2.5394 | 1650 | 0.0360 |
| 0.0244 | 2.6164 | 1700 | 0.0354 |
| 0.0295 | 2.6933 | 1750 | 0.0355 |
| 0.0257 | 2.7703 | 1800 | 0.0354 |
| 0.0313 | 2.8472 | 1850 | 0.0339 |
| 0.0208 | 2.9242 | 1900 | 0.0348 |
| 0.0314 | 3.0012 | 1950 | 0.0349 |
| 0.0119 | 3.0781 | 2000 | 0.0392 |
| 0.0147 | 3.1551 | 2050 | 0.0425 |
| 0.011 | 3.2320 | 2100 | 0.0402 |
| 0.012 | 3.3090 | 2150 | 0.0416 |
| 0.0205 | 3.3859 | 2200 | 0.0398 |
| 0.0127 | 3.4629 | 2250 | 0.0399 |
| 0.0141 | 3.5398 | 2300 | 0.0419 |
| 0.0081 | 3.6168 | 2350 | 0.0435 |
| 0.0132 | 3.6937 | 2400 | 0.0412 |
| 0.0111 | 3.7707 | 2450 | 0.0433 |
| 0.0069 | 3.8476 | 2500 | 0.0446 |
| 0.0135 | 3.9246 | 2550 | 0.0454 |
| 0.0093 | 4.0015 | 2600 | 0.0452 |
| 0.0024 | 4.0785 | 2650 | 0.0488 |
| 0.0082 | 4.1554 | 2700 | 0.0550 |
| 0.004 | 4.2324 | 2750 | 0.0540 |
| 0.0044 | 4.3093 | 2800 | 0.0571 |
| 0.0027 | 4.3863 | 2850 | 0.0591 |
| 0.0044 | 4.4633 | 2900 | 0.0599 |
| 0.0017 | 4.5402 | 2950 | 0.0583 |
| 0.0034 | 4.6172 | 3000 | 0.0581 |
| 0.0048 | 4.6941 | 3050 | 0.0578 |
| 0.0051 | 4.7711 | 3100 | 0.0577 |
| 0.0053 | 4.8480 | 3150 | 0.0578 |
| 0.0033 | 4.9250 | 3200 | 0.0577 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3