sum_model_2r1e_3_20_extract_long_text

Only texts longer than 500 words were extracted for training.

Dataset statistics:

  • Sentences per example in training data: 12; in validation data: 12
  • Words in training data: 350 (summary: 32; after modification: 294)
  • Words in validation data: 344 (summary: 32; after modification: 294)
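The length filter described above (keeping only texts longer than 500 words) can be sketched in plain Python. The function names, the `"text"` field name, and applying the 500-word threshold per example are assumptions, since the original preprocessing script is not shown:

```python
def word_count(text: str) -> int:
    """Count whitespace-separated words."""
    return len(text.split())

def filter_long_texts(examples: list[dict], min_words: int = 500) -> list[dict]:
    """Keep only examples whose source text exceeds min_words words."""
    return [ex for ex in examples if word_count(ex["text"]) > min_words]

# Illustrative data, not the actual dataset:
examples = [
    {"text": "short text", "summary": "s"},
    {"text": "word " * 501, "summary": "long enough"},
]
long_only = filter_long_texts(examples)
print(len(long_only))  # 1
```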

This model is a fine-tuned version of weny22/sum_model_t5_saved on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 2.0769
  • Rouge1: 0.2174
  • Rouge2: 0.0883
  • Rougel: 0.1783
  • Rougelsum: 0.1783
  • Gen Len: 18.9707
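The Rouge1 score above measures unigram overlap between generated and reference summaries. A minimal sketch of ROUGE-1 F1, without the stemming and tokenization refinements of the official scorer, might look like:

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Unigram-overlap F1 between a prediction and a reference (simplified)."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Count unigrams appearing in both, respecting multiplicity.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if not pred_tokens or not ref_tokens or overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat", "the cat sat on the mat"))  # ≈ 0.667
```

In practice these metrics are computed with the `rouge_score`/`evaluate` packages, which also handle Rouge2, RougeL, and RougeLsum.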

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.002
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
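The linear scheduler above decays the learning rate from its initial value to zero over the full run. A minimal sketch of that decay (assuming zero warmup steps, the Trainer default when none is set) could be:

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 0.002) -> float:
    """Linearly decay base_lr to 0 over total_steps, with no warmup."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

total_steps = 6700  # 335 steps/epoch × 20 epochs, per the results table below
print(linear_lr(0, total_steps))     # 0.002
print(linear_lr(3350, total_steps))  # 0.001 (halfway through training)
```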

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|---------------|-------|------|-----------------|--------|--------|--------|-----------|---------|
| No log        | 1.0   | 335  | 2.1718          | 0.1974 | 0.0646 | 0.1569 | 0.1572    | 18.9793 |
| 2.7704        | 2.0   | 670  | 2.0191          | 0.2026 | 0.0737 | 0.1627 | 0.1629    | 18.99   |
| 2.1753        | 3.0   | 1005 | 1.9912          | 0.209  | 0.0792 | 0.1703 | 0.1704    | 18.9607 |
| 2.1753        | 4.0   | 1340 | 1.9403          | 0.2051 | 0.0797 | 0.1673 | 0.1673    | 18.9793 |
| 1.9544        | 5.0   | 1675 | 1.9172          | 0.2162 | 0.0857 | 0.1766 | 0.1766    | 18.9853 |
| 1.8287        | 6.0   | 2010 | 1.9292          | 0.2115 | 0.0829 | 0.1719 | 0.172    | 18.9727 |
| 1.8287        | 7.0   | 2345 | 1.9303          | 0.2113 | 0.0838 | 0.1722 | 0.1721    | 18.972  |
| 1.6903        | 8.0   | 2680 | 1.9165          | 0.2146 | 0.0843 | 0.1754 | 0.1756    | 18.972  |
| 1.6063        | 9.0   | 3015 | 1.9242          | 0.2166 | 0.0864 | 0.1764 | 0.1765    | 18.9707 |
| 1.6063        | 10.0  | 3350 | 1.9196          | 0.2143 | 0.0843 | 0.175  | 0.1749    | 18.986  |
| 1.4965        | 11.0  | 3685 | 1.9313          | 0.2148 | 0.0847 | 0.1743 | 0.1742    | 18.9893 |
| 1.4333        | 12.0  | 4020 | 1.9512          | 0.2165 | 0.0873 | 0.1777 | 0.1778    | 18.99   |
| 1.4333        | 13.0  | 4355 | 1.9756          | 0.2163 | 0.0882 | 0.1779 | 0.178    | 18.9747 |
| 1.346         | 14.0  | 4690 | 1.9945          | 0.2182 | 0.0893 | 0.1792 | 0.1792    | 18.9873 |
| 1.2887        | 15.0  | 5025 | 2.0073          | 0.2154 | 0.085  | 0.1756 | 0.1758    | 18.9873 |
| 1.2887        | 16.0  | 5360 | 2.0176          | 0.2166 | 0.0878 | 0.1772 | 0.1773    | 18.9687 |
| 1.2234        | 17.0  | 5695 | 2.0302          | 0.2172 | 0.0873 | 0.178  | 0.178     | 18.9767 |
| 1.1833        | 18.0  | 6030 | 2.0505          | 0.2156 | 0.0854 | 0.1761 | 0.1761    | 18.9653 |
| 1.1833        | 19.0  | 6365 | 2.0633          | 0.2174 | 0.0875 | 0.1777 | 0.1778    | 18.9787 |
| 1.1396        | 20.0  | 6700 | 2.0769          | 0.2174 | 0.0883 | 0.1783 | 0.1783    | 18.9707 |
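Validation loss bottoms out at epoch 8 (1.9165) and drifts upward afterwards while ROUGE inches up slightly, so the final checkpoint is not the best one by loss. Selecting the best epoch from the table's loss column:

```python
# (epoch, validation loss) pairs taken from the results table above
val_losses = [
    (1, 2.1718), (2, 2.0191), (3, 1.9912), (4, 1.9403), (5, 1.9172),
    (6, 1.9292), (7, 1.9303), (8, 1.9165), (9, 1.9242), (10, 1.9196),
    (11, 1.9313), (12, 1.9512), (13, 1.9756), (14, 1.9945), (15, 2.0073),
    (16, 2.0176), (17, 2.0302), (18, 2.0505), (19, 2.0633), (20, 2.0769),
]
best_epoch, best_loss = min(val_losses, key=lambda pair: pair[1])
print(best_epoch, best_loss)  # 8 1.9165
```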

Framework versions

  • Transformers 4.38.2
  • Pytorch 2.1.2+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2
Model size: 90.5M parameters (F32, Safetensors)