---
license: apache-2.0
base_model: google/t5-efficient-tiny
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: denoice-finetuned-xsum
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# denoice-finetuned-xsum

This model is a fine-tuned version of [google/t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0564
- Rouge1: 63.8802
- Rouge2: 45.4086
- Rougel: 63.8882
- Rougelsum: 63.8316
- Gen Len: 17.2016
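
No usage example is documented above, so the following is only a minimal inference sketch. The repository id, any task prefix, and the expected input format are assumptions, since the training data and preprocessing are not recorded in this card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical identifier: replace with the actual Hub repo id or a local checkpoint path.
model_id = "denoice-finetuned-xsum"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Any task prefix (e.g. "summarize: ") depends on how the training data was
# preprocessed, which is not recorded in this card.
text = "Example input text for the model."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

output_ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The generation settings (`num_beams`, `max_new_tokens`) are illustrative; the reported Gen Len of roughly 17 tokens suggests the model was trained to produce short outputs.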

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 500
- eval_batch_size: 500
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
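
The training script itself is not part of this card. As a rough sketch, the hyperparameters above would map onto `Seq2SeqTrainingArguments` roughly as follows; the output directory, evaluation strategy, and generation flag are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of how the listed hyperparameters map onto the HF Trainer API
# (Transformers 4.36.x). Values not listed in the card are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="denoice-finetuned-xsum",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=500,      # card reports train_batch_size: 500 (assuming a single device)
    per_device_eval_batch_size=500,
    seed=42,
    num_train_epochs=25,
    lr_scheduler_type="linear",
    # "Adam with betas=(0.9,0.999) and epsilon=1e-08" matches the defaults:
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",           # assumed: metrics are reported once per epoch
    predict_with_generate=True,            # needed for ROUGE / Gen Len during evaluation
)
```

The Adam betas and epsilon listed in the card match the `TrainingArguments` defaults, so they are spelled out here only for completeness.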

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 76   | 1.1573          | 62.0366 | 43.6573 | 62.0258 | 61.9691   | 17.2068 |
| No log        | 2.0   | 152  | 1.1458          | 61.7366 | 43.5997 | 61.7261 | 61.6638   | 17.2408 |
| No log        | 3.0   | 228  | 1.1342          | 62.8021 | 44.3773 | 62.8168 | 62.7397   | 17.178  |
| No log        | 4.0   | 304  | 1.1221          | 62.5511 | 44.4096 | 62.3775 | 62.3239   | 17.1518 |
| No log        | 5.0   | 380  | 1.1177          | 63.0909 | 44.9863 | 62.9819 | 62.9072   | 17.1702 |
| No log        | 6.0   | 456  | 1.1123          | 62.5334 | 44.2764 | 62.4559 | 62.4037   | 17.2173 |
| 1.5445        | 7.0   | 532  | 1.1073          | 62.8456 | 44.711  | 62.7463 | 62.7041   | 17.2016 |
| 1.5445        | 8.0   | 608  | 1.0983          | 63.0763 | 44.9468 | 62.9522 | 62.9795   | 17.2147 |
| 1.5445        | 9.0   | 684  | 1.0952          | 62.9383 | 44.9129 | 62.8777 | 62.8081   | 17.2487 |
| 1.5445        | 10.0  | 760  | 1.0947          | 62.8263 | 44.5132 | 62.7596 | 62.7362   | 17.233  |
| 1.5445        | 11.0  | 836  | 1.0801          | 63.0087 | 44.8035 | 63.0091 | 62.9498   | 17.1806 |
| 1.5445        | 12.0  | 912  | 1.0781          | 62.9718 | 44.6364 | 62.881  | 62.8786   | 17.1832 |
| 1.5445        | 13.0  | 988  | 1.0767          | 63.0711 | 44.7516 | 62.9967 | 62.9834   | 17.199  |
| 1.4815        | 14.0  | 1064 | 1.0722          | 63.1128 | 44.8069 | 63.0483 | 63.0151   | 17.2068 |
| 1.4815        | 15.0  | 1140 | 1.0719          | 63.2282 | 44.9567 | 63.2052 | 63.1787   | 17.2147 |
| 1.4815        | 16.0  | 1216 | 1.0684          | 63.3222 | 44.916  | 63.322  | 63.2505   | 17.199  |
| 1.4815        | 17.0  | 1292 | 1.0668          | 63.1931 | 44.9734 | 63.1833 | 63.114    | 17.2251 |
| 1.4815        | 18.0  | 1368 | 1.0640          | 63.5689 | 45.1652 | 63.62   | 63.5671   | 17.1806 |
| 1.4815        | 19.0  | 1444 | 1.0600          | 63.5552 | 45.2046 | 63.5795 | 63.5295   | 17.199  |
| 1.4452        | 20.0  | 1520 | 1.0593          | 63.5801 | 45.2453 | 63.5856 | 63.5245   | 17.199  |
| 1.4452        | 21.0  | 1596 | 1.0594          | 63.6291 | 45.1114 | 63.6412 | 63.5951   | 17.2042 |
| 1.4452        | 22.0  | 1672 | 1.0571          | 63.9129 | 45.3688 | 63.914  | 63.8618   | 17.1702 |
| 1.4452        | 23.0  | 1748 | 1.0573          | 63.8608 | 45.3548 | 63.857  | 63.8156   | 17.2042 |
| 1.4452        | 24.0  | 1824 | 1.0571          | 63.875  | 45.3997 | 63.8858 | 63.8202   | 17.2094 |
| 1.4452        | 25.0  | 1900 | 1.0564          | 63.8802 | 45.4086 | 63.8882 | 63.8316   | 17.2016 |

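
The ROUGE columns are the Trainer's per-epoch evaluation metrics, reported on a 0-100 scale. Below is a minimal sketch of how comparable scores can be computed with the `evaluate` library, assuming decoded predictions and references are already available.

```python
import evaluate

# Minimal sketch: compute ROUGE for decoded predictions against references.
# The lists below are placeholders for the model's decoded evaluation outputs.
rouge = evaluate.load("rouge")
predictions = ["the cat sat on the mat"]
references = ["a cat sat on the mat"]

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# With recent `evaluate` versions the values are plain floats in [0, 1];
# the table above reports them scaled by 100.
print({k: round(v * 100, 4) for k, v in scores.items()})
```

With a `Seq2SeqTrainer`, this computation would typically live inside a `compute_metrics` callback rather than being run standalone.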

### Framework versions

- Transformers 4.36.2
- Pytorch 1.13.1
- Datasets 2.16.1
- Tokenizers 0.15.0
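
To sanity-check a local environment against these versions, something like the following can be used (the import names are assumed to be the standard PyPI packages):

```python
import datasets
import tokenizers
import torch
import transformers

# Quick environment check against the versions listed above.
print("Transformers:", transformers.__version__)  # expected 4.36.2
print("PyTorch:", torch.__version__)              # expected 1.13.1
print("Datasets:", datasets.__version__)          # expected 2.16.1
print("Tokenizers:", tokenizers.__version__)      # expected 0.15.0
```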