---
base_model: weny22/sum_model_t5_saved
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: extract_long_text_unbalanced_smaller_5
  results: []
---

# extract_long_text_unbalanced_smaller_5

This model is a fine-tuned version of [weny22/sum_model_t5_saved](https://huggingface.co/weny22/sum_model_t5_saved) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2242
- Rouge1: 0.2008
- Rouge2: 0.0688
- Rougel: 0.1593
- Rougelsum: 0.1594
- Gen Len: 18.9847

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log        | 1.0   | 72   | 2.3970          | 0.1842 | 0.0572 | 0.1461 | 0.1458    | 18.98   |
| No log        | 2.0   | 144  | 2.2826          | 0.1923 | 0.0623 | 0.1516 | 0.1515    | 19.0    |
| No log        | 3.0   | 216  | 2.2308          | 0.1945 | 0.0634 | 0.1529 | 0.1527    | 18.9953 |
| No log        | 4.0   | 288  | 2.1962          | 0.1944 | 0.0636 | 0.1528 | 0.1527    | 18.9967 |
| No log        | 5.0   | 360  | 2.1940          | 0.1948 | 0.0633 | 0.1529 | 0.1528    | 18.9953 |
| No log        | 6.0   | 432  | 2.1734          | 0.1882 | 0.0628 | 0.1492 | 0.1491    | 18.99   |
| 3.0387        | 7.0   | 504  | 2.1584          | 0.1964 | 0.0663 | 0.156  | 0.1559    | 18.992  |
| 3.0387        | 8.0   | 576  | 2.1588          | 0.197  | 0.068  | 0.1563 | 0.1562    | 18.9847 |
| 3.0387        | 9.0   | 648  | 2.1852          | 0.1967 | 0.0669 | 0.156  | 0.1559    | 18.9793 |
| 3.0387        | 10.0  | 720  | 2.1859          | 0.201  | 0.0685 | 0.159  | 0.1587    | 18.982  |
| 3.0387        | 11.0  | 792  | 2.1760          | 0.1936 | 0.0643 | 0.1534 | 0.1531    | 18.9953 |
| 3.0387        | 12.0  | 864  | 2.2081          | 0.1978 | 0.0672 | 0.1566 | 0.1564    | 18.9753 |
| 3.0387        | 13.0  | 936  | 2.2030          | 0.1991 | 0.068  | 0.1584 | 0.158     | 18.9833 |
| 2.204         | 14.0  | 1008 | 2.2029          | 0.1981 | 0.0686 | 0.1578 | 0.1578    | 18.9867 |
| 2.204         | 15.0  | 1080 | 2.2076          | 0.2016 | 0.0694 | 0.1595 | 0.1592    | 18.9773 |
| 2.204         | 16.0  | 1152 | 2.2172          | 0.203  | 0.0716 | 0.1617 | 0.1617    | 18.9893 |
| 2.204         | 17.0  | 1224 | 2.2136          | 0.2018 | 0.0697 | 0.1604 | 0.1603    | 18.9827 |
| 2.204         | 18.0  | 1296 | 2.2147          | 0.2016 | 0.0695 | 0.1601 | 0.1599    | 18.988  |
| 2.204         | 19.0  | 1368 | 2.2224          | 0.2007 | 0.0687 | 0.1592 | 0.1592    | 18.9847 |
| 2.204         | 20.0  | 1440 | 2.2242          | 0.2008 | 0.0688 | 0.1593 | 0.1594    | 18.9847 |

### Framework versions

- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
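
For orientation, the training hyperparameters listed above map roughly onto a `Seq2SeqTrainingArguments` configuration like the sketch below. The `output_dir`, `evaluation_strategy`, and `predict_with_generate` values are assumptions rather than settings reported on this card, and the Adam betas/epsilon shown above are the `Trainer` defaults.

```python
from transformers import Seq2SeqTrainingArguments

# Minimal sketch of the reported hyperparameters; not the exact training script.
training_args = Seq2SeqTrainingArguments(
    output_dir="extract_long_text_unbalanced_smaller_5",  # assumed output directory
    learning_rate=1e-3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",  # assumed; matches the per-epoch results table
    predict_with_generate=True,   # assumed; needed to compute ROUGE during evaluation
)
```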
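
Since the base model is T5-based, a checkpoint like this can be loaded with the standard `summarization` pipeline. In the sketch below, the Hub repository ID is inferred from the model name and the generation length is capped near the reported Gen Len of roughly 19 tokens; both are assumptions, not settings taken from this card.

```python
from transformers import pipeline

# Assumed Hub repository ID, inferred from the model name on this card.
summarizer = pipeline(
    "summarization",
    model="weny22/extract_long_text_unbalanced_smaller_5",
)

text = "Replace this with the long document you want to summarize."

# max_length of ~20 tokens roughly matches the reported Gen Len; this is an assumption.
result = summarizer(text, max_length=20, min_length=5, do_sample=False)
print(result[0]["summary_text"])
```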