# e2dcc4f656c7d5f191b39077da5c84eb
This model is a fine-tuned version of Qwen/Qwen2.5-1.5B on the fancyzhx/dbpedia_14 dataset. It achieves the following results on the evaluation set:
- Loss: 0.3547
- Data Size: 1.0
- Epoch Runtime: 3713.0902
- Accuracy: 0.9879
- F1 Macro: 0.9880
- Rouge1: 0.9879
- Rouge2: 0.0
- Rougel: 0.9879
- Rougelsum: 0.9879
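The card does not document the task head or the inference interface. Given the accuracy/F1 metrics over the 14 DBpedia classes (and Rouge2 staying at 0.0, which is consistent with single-word label strings), the sketch below assumes a sequence-classification head; the repository ID is taken from this page, while the example text and label handling are illustrative only.

```python
# Minimal inference sketch, assuming a sequence-classification head over
# the 14 DBpedia classes. The repo id comes from this card; the example
# text and label handling are illustrative, not from the training setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "contemmcm/e2dcc4f656c7d5f191b39077da5c84eb"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

text = "The River Thames flows through southern England, including London."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

pred_id = int(logits.argmax(dim=-1))
print(pred_id, model.config.id2label.get(pred_id, pred_id))
```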
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
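For reference, the dataset named in the summary can be loaded as below; the split and field names follow the public fancyzhx/dbpedia_14 dataset card, and the exact preprocessing used for this run is not documented here.

```python
# Sketch of loading the dataset named in the summary. Field and split
# names follow the public fancyzhx/dbpedia_14 card; the preprocessing
# used for this particular fine-tune is not documented here.
from datasets import load_dataset

dataset = load_dataset("fancyzhx/dbpedia_14")   # splits: train / test
print(dataset)
print(dataset["train"][0])                      # fields: label, title, content
```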
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged sketch mapping them to `TrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 50
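As a rough guide, the values above map onto `transformers.TrainingArguments` roughly as follows; the per-device batch size of 8 across 4 GPUs yields the reported total batch size of 32. The output directory is a placeholder, and nothing here is taken from the actual training script.

```python
# Sketch mapping the listed hyperparameters onto TrainingArguments.
# Per-device batch size 8 on 4 GPUs gives the reported total batch size of 32.
# output_dir is a placeholder; everything else mirrors the list above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",            # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=50,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```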
### Training results
| Training Loss | Epoch | Step | Validation Loss | Data Size | Epoch Runtime | Accuracy | F1 Macro | Rouge1 | Rouge2 | Rougel | Rougelsum |
|---|---|---|---|---|---|---|---|---|---|---|---|
| No log | 0 | 0 | 21.2482 | 0 | 132.7540 | 0.1160 | 0.0698 | 0.1161 | 0.0 | 0.1160 | 0.1160 |
| 0.7889 | 1 | 17500 | 0.7406 | 0.0078 | 162.9353 | 0.9730 | 0.9729 | 0.9730 | 0.0 | 0.9730 | 0.9730 |
| 0.612 | 2 | 35000 | 0.4655 | 0.0156 | 189.8697 | 0.9803 | 0.9803 | 0.9803 | 0.0 | 0.9803 | 0.9803 |
| 0.3241 | 3 | 52500 | 0.4839 | 0.0312 | 250.5697 | 0.9802 | 0.9803 | 0.9803 | 0.0 | 0.9802 | 0.9803 |
| 0.3493 | 4 | 70000 | 0.3574 | 0.0625 | 360.2721 | 0.9837 | 0.9838 | 0.9837 | 0.0 | 0.9837 | 0.9837 |
| 0.258 | 5 | 87500 | 0.3628 | 0.125 | 582.8575 | 0.9820 | 0.9820 | 0.9820 | 0.0 | 0.9820 | 0.9820 |
| 0.2687 | 6 | 105000 | 0.2212 | 0.25 | 1034.9854 | 0.9878 | 0.9878 | 0.9878 | 0.0 | 0.9878 | 0.9878 |
| 0.0016 | 7 | 122500 | 0.2321 | 0.5 | 1931.5358 | 0.9879 | 0.9879 | 0.9879 | 0.0 | 0.9879 | 0.9879 |
| 0.1505 | 8.0 | 140000 | 0.2099 | 1.0 | 3720.7778 | 0.9896 | 0.9896 | 0.9896 | 0.0 | 0.9896 | 0.9896 |
| 0.0799 | 9.0 | 157500 | 0.2330 | 1.0 | 3724.0398 | 0.9883 | 0.9884 | 0.9883 | 0.0 | 0.9883 | 0.9883 |
| 0.1063 | 10.0 | 175000 | 0.2673 | 1.0 | 3732.2440 | 0.9890 | 0.9890 | 0.9890 | 0.0 | 0.9890 | 0.9890 |
| 0.067 | 11.0 | 192500 | 0.3009 | 1.0 | 3706.3550 | 0.9890 | 0.9889 | 0.9890 | 0.0 | 0.9890 | 0.9890 |
| 0.0324 | 12.0 | 210000 | 0.3547 | 1.0 | 3713.0902 | 0.9879 | 0.9880 | 0.9879 | 0.0 | 0.9879 | 0.9879 |
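The exact metric implementation used for this run is not documented in the card. Below is a minimal sketch with the `evaluate` library, assuming class ids for accuracy/macro F1 and label strings for ROUGE; the example predictions are illustrative only. (Rouge2 remaining 0.0 is what one would expect when the references are single-word label names, since no bigrams exist.)

```python
# Sketch of computing the reported metrics with the `evaluate` library.
# Class ids are used for accuracy / macro F1 and label strings for ROUGE;
# the predictions below are illustrative, not from the actual run.
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")
rouge = evaluate.load("rouge")

pred_ids = [3, 7, 7]
ref_ids = [3, 7, 2]
pred_texts = ["Artist", "Building", "Building"]
ref_texts = ["Artist", "Building", "NaturalPlace"]

print(accuracy.compute(predictions=pred_ids, references=ref_ids))
print(f1.compute(predictions=pred_ids, references=ref_ids, average="macro"))
print(rouge.compute(predictions=pred_texts, references=ref_texts))
```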
### Framework versions
- Transformers 4.57.0
- Pytorch 2.8.0+cu128
- Datasets 4.2.0
- Tokenizers 0.22.1