# 19091b79c23aa18d1e86cb8d3cce32cd
This model is a fine-tuned version of meta-llama/Llama-3.2-1B-Instruct on the dim/tldr_news dataset. It achieves the following results on the evaluation set (see the metric sketch after this list):
- Loss: 6.6490
- Data Size: 1.0
- Epoch Runtime: 29.0967
- Accuracy: 0.7351
- F1 Macro: 0.7785
- Rouge1: 0.7358
- Rouge2: 0.0
- Rougel: 0.7351
- Rougelsum: 0.7351
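The accuracy, macro-F1, and ROUGE scores above match the metrics exposed by the Hugging Face `evaluate` library. The sketch below shows how such values can be computed; the label ids and strings are illustrative placeholders, not actual model outputs:

```python
import evaluate

# Placeholder label ids and decoded strings; real values would come from
# the model's predictions on the dim/tldr_news evaluation split.
pred_ids, ref_ids = [0, 1, 1], [0, 2, 1]
pred_texts = ["big tech & startups", "science", "programming"]
ref_texts = ["big tech & startups", "design", "programming"]

# Accuracy and macro F1 operate on label ids.
accuracy = evaluate.load("accuracy").compute(predictions=pred_ids, references=ref_ids)
f1_macro = evaluate.load("f1").compute(predictions=pred_ids, references=ref_ids, average="macro")

# ROUGE operates on the decoded text (requires the rouge_score package).
rouge = evaluate.load("rouge").compute(predictions=pred_texts, references=ref_texts)

print(accuracy["accuracy"], f1_macro["f1"], rouge["rouge1"], rouge["rougeL"])
```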
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the configuration sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: constant
- num_epochs: 50
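A minimal `TrainingArguments` sketch that mirrors the list above. Assumptions: the run used the standard `Trainer`, and `output_dir` is a placeholder; the exact training script is not published.

```python
from transformers import TrainingArguments

# Sketch reproducing the hyperparameters listed above. The total
# train/eval batch size of 32 comes from 8 per device x 4 GPUs
# under multi-GPU training.
training_args = TrainingArguments(
    output_dir="llama-3.2-1b-tldr-news",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=50,
    lr_scheduler_type="constant",
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```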
### Training results
| Training Loss | Epoch | Step | Validation Loss | Data Size | Epoch Runtime | Accuracy | F1 Macro | Rouge1 | Rouge2 | Rougel | Rougelsum |
|---|---|---|---|---|---|---|---|---|---|---|---|
| No log | 0 | 0 | 11.0898 | 0 | 2.7847 | 0.1648 | 0.1248 | 0.1641 | 0.0 | 0.1648 | 0.1641 |
| No log | 1 | 178 | 22.2126 | 0.0078 | 2.7055 | 0.2571 | 0.1008 | 0.2571 | 0.0 | 0.2578 | 0.2564 |
| No log | 2 | 356 | 5.7174 | 0.0156 | 4.1475 | 0.4240 | 0.2367 | 0.4240 | 0.0 | 0.4240 | 0.4240 |
| No log | 3 | 534 | 5.1037 | 0.0312 | 6.3905 | 0.5646 | 0.4126 | 0.5639 | 0.0 | 0.5646 | 0.5646 |
| No log | 4 | 712 | 3.9046 | 0.0625 | 7.5815 | 0.7088 | 0.5644 | 0.7095 | 0.0 | 0.7095 | 0.7088 |
| No log | 5 | 890 | 3.2202 | 0.125 | 9.1255 | 0.7053 | 0.6141 | 0.7067 | 0.0 | 0.7053 | 0.7060 |
| 0.3024 | 6 | 1068 | 2.7916 | 0.25 | 12.1653 | 0.7280 | 0.6380 | 0.7294 | 0.0 | 0.7280 | 0.7287 |
| 2.5771 | 7 | 1246 | 2.9160 | 0.5 | 19.0042 | 0.7280 | 0.7155 | 0.7287 | 0.0 | 0.7290 | 0.7280 |
| 1.9239 | 8 | 1424 | 2.7414 | 1.0 | 32.4775 | 0.7500 | 0.7325 | 0.7514 | 0.0 | 0.7507 | 0.7500 |
| 0.9656 | 9 | 1602 | 3.6120 | 1.0 | 32.0793 | 0.7528 | 0.7548 | 0.7539 | 0.0 | 0.7525 | 0.7536 |
| 0.6819 | 10 | 1780 | 5.4613 | 1.0 | 34.0914 | 0.7550 | 0.7944 | 0.7557 | 0.0 | 0.7557 | 0.7550 |
| 0.5557 | 11 | 1958 | 6.7016 | 1.0 | 29.0314 | 0.7401 | 0.7616 | 0.7401 | 0.0 | 0.7401 | 0.7401 |
| 0.5357 | 12 | 2136 | 6.6490 | 1.0 | 29.0967 | 0.7351 | 0.7785 | 0.7358 | 0.0 | 0.7351 | 0.7351 |
### Framework versions
- Transformers 4.57.0
- Pytorch 2.8.0+cu128
- Datasets 4.3.0
- Tokenizers 0.22.1
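
## How to use

The task format is not documented above, so the following is a minimal loading sketch that assumes the checkpoint keeps the base model's causal-LM interface; adjust the `Auto` class if the uploaded head differs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the checkpoint exposes the base model's causal-LM interface.
repo_id = "contemmcm/19091b79c23aa18d1e86cb8d3cce32cd"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
model.eval()

# Placeholder input; real inputs come from the dim/tldr_news format.
inputs = tokenizer("TLDR news headline goes here", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```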