---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- scitldr
metrics:
- rouge
model-index:
- name: paper-summary
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: scitldr
      type: scitldr
      config: Abstract
      split: train
      args: Abstract
    metrics:
    - name: Rouge1
      type: rouge
      value: 0.3484
---

# paper-summary

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the scitldr dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8631
- Rouge1: 0.3484
- Rouge2: 0.1596
- Rougel: 0.2971
- Rougelsum: 0.3047
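A minimal inference sketch with the Transformers `pipeline` API is shown below. The model path `"paper-summary"` is a placeholder for wherever this checkpoint is hosted or saved locally, and the generation settings are illustrative rather than taken from the original evaluation.

```python
from transformers import pipeline

# "paper-summary" is a placeholder; point it at the actual repo id or local path
# of this checkpoint. If a "summarize: " task prefix was used during preprocessing,
# prepend it to the input text as well.
summarizer = pipeline("summarization", model="paper-summary")

abstract = (
    "We study extreme summarization of scientific papers, producing a single "
    "sentence that captures the key contribution described in an abstract."
)

result = summarizer(abstract, max_length=64, min_length=8)
print(result[0]["summary_text"])
```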
## Model description

The model is [t5-small](https://huggingface.co/t5-small), an encoder-decoder Transformer, fine-tuned for abstractive summarization of scientific writing: given the abstract of a research paper, it generates a short TL;DR-style summary in the spirit of the SciTLDR task.
## Intended uses & limitations

The model is intended for generating short, TL;DR-style summaries of English scientific abstracts, as in the usage sketch above. It was fine-tuned only on the scitldr dataset, so quality on other domains, very long inputs, or non-English text is untested, and generated summaries may omit or misstate details of the source, so they should be checked before being relied on.
## Training and evaluation data

The model was trained and evaluated on the scitldr dataset (SciTLDR), using the `Abstract` configuration, which pairs the abstract of a paper with one or more human-written TL;DR summaries.
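For reference, here is a minimal sketch of loading the same configuration with the `datasets` library; the hub id `allenai/scitldr` and the `source`/`target` field names are assumptions based on the public SciTLDR release, not values taken from the original training script.

```python
from datasets import load_dataset

# Load the SciTLDR "Abstract" configuration (hub id assumed to be allenai/scitldr).
scitldr = load_dataset("allenai/scitldr", "Abstract")

example = scitldr["train"][0]
print(example["source"])  # the abstract, as a list of sentences
print(example["target"])  # one or more reference TL;DR summaries
```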
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
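As a rough, non-authoritative sketch, these settings map onto a `Seq2SeqTrainingArguments` configuration along the following lines; the `output_dir` and the evaluation strategy are assumptions rather than values recovered from the original run.

```python
from transformers import Seq2SeqTrainingArguments

# Approximate reconstruction of the reported hyperparameters; output_dir and
# evaluation_strategy are assumptions, not taken from the original training run.
training_args = Seq2SeqTrainingArguments(
    output_dir="paper-summary",    # placeholder output directory
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=8,
    lr_scheduler_type="linear",
    fp16=True,                     # "Native AMP" mixed precision
    evaluation_strategy="epoch",   # assumed from the per-epoch validation log
    predict_with_generate=True,    # generate summaries so ROUGE can be computed
)
```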
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 3.0545        | 1.0   | 63   | 2.9939          | 0.3387 | 0.1538 | 0.2887 | 0.2957    |
| 2.7871        | 2.0   | 126  | 2.9360          | 0.3448 | 0.1577 | 0.2947 | 0.3019    |
| 2.7188        | 3.0   | 189  | 2.8977          | 0.3477 | 0.1585 | 0.2967 | 0.3035    |
| 2.6493        | 4.0   | 252  | 2.8837          | 0.3488 | 0.1597 | 0.2973 | 0.3046    |
| 2.6207        | 5.0   | 315  | 2.8690          | 0.3472 | 0.1566 | 0.2958 | 0.3033    |
| 2.5893        | 6.0   | 378  | 2.8668          | 0.3493 | 0.1592 | 0.2972 | 0.3050    |
| 2.5494        | 7.0   | 441  | 2.8657          | 0.3486 | 0.1595 | 0.2976 | 0.3053    |
| 2.5554        | 8.0   | 504  | 2.8631          | 0.3484 | 0.1596 | 0.2971 | 0.3047    |
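The ROUGE values above are fractions in [0, 1]. A minimal sketch of computing comparable scores with the `evaluate` library follows; the prediction and reference strings are purely illustrative.

```python
import evaluate

# Load the ROUGE metric and score a toy prediction against a toy reference.
rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["a single-sentence tl;dr of the paper"],
    references=["a one-sentence tl;dr summarizing the paper"],
)
print(scores)  # dict with rouge1, rouge2, rougeL, rougeLsum F-measures
```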
### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1