# t5-base-korean-text-summary-mydata
This model is a fine-tuned version of [lcw99/t5-base-korean-text-summary](https://huggingface.co/lcw99/t5-base-korean-text-summary) on a custom dataset (not further specified). It achieves the following results on the evaluation set:
- Loss: 0.4475
- Rouge1 Precision: 0.4783
- Rouge1 Recall: 0.5541
- Rouge1 F1: 0.5076
- Rouge2 Precision: 0.3418
- Rouge2 Recall: 0.3964
- Rouge2 F1: 0.3626
- Rouge3 Precision: 0.2566
- Rouge3 Recall: 0.2981
- Rouge3 F1: 0.2722
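The per-n-gram precision, recall, and F1 values above can be reproduced with the `rouge_score` package. The sketch below is illustrative, not the exact evaluation script behind this card; note that `rouge_score`'s default tokenizer keeps only Latin alphanumerics and would discard Hangul, so a simple whitespace tokenizer (a hypothetical stand-in) is passed in for Korean text.

```python
from rouge_score import rouge_scorer

# Hypothetical whitespace tokenizer: the default tokenizer in rouge_score
# strips everything outside [a-z0-9] and would drop Korean characters.
class WhitespaceTokenizer:
    def tokenize(self, text):
        return text.split()

scorer = rouge_scorer.RougeScorer(
    ["rouge1", "rouge2", "rouge3"],  # unigram, bigram, trigram overlap
    tokenizer=WhitespaceTokenizer(),
)

# score(reference, prediction) returns precision/recall/fmeasure per metric
scores = scorer.score("참조 요약 문장입니다.", "모델이 생성한 요약 문장입니다.")
for name, s in scores.items():
    print(name, f"P={s.precision:.4f} R={s.recall:.4f} F1={s.fmeasure:.4f}")
```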
## Model description
More information needed
## Intended uses & limitations
More information needed
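Although no usage notes were provided, the model follows the standard `transformers` seq2seq API, so inference should look roughly like the sketch below. The generation parameters (`max_length`, `num_beams`) are illustrative assumptions, not settings documented for this model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "aetin/t5-base-korean-text-summary-mydata"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "요약할 한국어 본문을 여기에 넣습니다."  # Korean source text to summarize
inputs = tokenizer(text, return_tensors="pt", max_length=512, truncation=True)

with torch.no_grad():
    summary_ids = model.generate(**inputs, max_length=128, num_beams=4)

print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```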
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of how they map onto the Hugging Face `Trainer` follows the list):
- learning_rate: 8e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
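As a rough guide, these hyperparameters correspond to a `Seq2SeqTrainingArguments` configuration like the one below. The `output_dir` and the per-epoch evaluation strategy are assumptions (the latter suggested by the per-epoch rows in the results table); the listed optimizer settings (betas=(0.9, 0.999), epsilon=1e-8) match the `Trainer` defaults.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-korean-text-summary-mydata",  # assumed name
    learning_rate=8e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    evaluation_strategy="epoch",  # assumed: metrics are reported per epoch
    predict_with_generate=True,   # required to compute ROUGE during eval
)
```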
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 F1 | Rouge2 Precision | Rouge2 Recall | Rouge2 F1 | Rouge3 Precision | Rouge3 Recall | Rouge3 F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.5281 | 1.0 | 34033 | 0.4581 | 0.4724 | 0.5544 | 0.5046 | 0.3340 | 0.3925 | 0.3569 | 0.2499 | 0.2941 | 0.2672 |
| 0.4748 | 2.0 | 68066 | 0.4475 | 0.4783 | 0.5541 | 0.5076 | 0.3418 | 0.3964 | 0.3626 | 0.2566 | 0.2981 | 0.2722 |
### Framework versions
- Transformers 4.40.2
- PyTorch 2.8.0+cu128
- Datasets 2.19.0
- Tokenizers 0.19.1