long_text_balanced_smaller_original_text

The model was trained on a balanced dataset, without preprocessing the training data.

This model is a fine-tuned version of weny22/sum_model_t5_saved on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 2.3143
  • Rouge1: 0.2101
  • Rouge2: 0.0804
  • Rougel: 0.1705
  • Rougelsum: 0.1707
  • Gen Len: 18.986
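The ROUGE scores above were produced by the evaluation pipeline (typically the `rouge_score` package, which also applies stemming). As a rough illustration only, ROUGE-1 F1 is the unigram-overlap F1 between a generated summary and a reference; a minimal sketch, not the exact implementation used here:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1: unigram-overlap F1 (no stemming or tokenizer tricks)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge1_f1("the cat sat", "the cat sat on the mat")` gives 2/3: all three candidate unigrams match (precision 1.0), but only three of the six reference unigrams are covered (recall 0.5).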

Model description

More information needed

Intended uses & limitations

More information needed
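The model can be used like any T5-style summarization checkpoint. A minimal sketch, assuming the standard `summarize:` task prefix (the card does not state whether a prefix was used during fine-tuning) and requiring the checkpoint to be downloaded:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "weny22/long_text_balanced_smaller_original_text"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Long input text to summarize..."
# "summarize: " prefix is an assumption based on common T5 usage.
inputs = tokenizer("summarize: " + text, return_tensors="pt", truncation=True)
# Gen Len hovers around 19 tokens in the eval results above.
summary_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```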

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.002
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
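The hyperparameters above correspond to a `Seq2SeqTrainingArguments` configuration roughly like the following sketch; `output_dir` and the evaluation/generation flags are assumptions, not stated in the card (the Adam betas and epsilon listed above are the library defaults):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="long_text_balanced_smaller_original_text",  # assumed name
    learning_rate=2e-3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",  # assumed: eval rows below are per epoch
    predict_with_generate=True,   # assumed: needed to compute ROUGE at eval
)
```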

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log        | 1.0   | 119  | 2.3387          | 0.1849 | 0.0583 | 0.1474 | 0.1475    | 18.98   |
| No log        | 2.0   | 238  | 2.1768          | 0.1954 | 0.0647 | 0.1538 | 0.1537    | 18.9707 |
| No log        | 3.0   | 357  | 2.1282          | 0.1952 | 0.0637 | 0.1537 | 0.1536    | 18.9947 |
| No log        | 4.0   | 476  | 2.1173          | 0.1953 | 0.0683 | 0.1559 | 0.1557    | 18.9813 |
| 2.7944        | 5.0   | 595  | 2.0873          | 0.2022 | 0.0743 | 0.1624 | 0.1623    | 18.976  |
| 2.7944        | 6.0   | 714  | 2.0851          | 0.2054 | 0.0769 | 0.1652 | 0.1653    | 18.9887 |
| 2.7944        | 7.0   | 833  | 2.0948          | 0.2043 | 0.0762 | 0.1633 | 0.1632    | 18.972  |
| 2.7944        | 8.0   | 952  | 2.1123          | 0.1992 | 0.0745 | 0.1607 | 0.1605    | 18.9673 |
| 1.9807        | 9.0   | 1071 | 2.1280          | 0.2067 | 0.0779 | 0.1669 | 0.1669    | 18.9767 |
| 1.9807        | 10.0  | 1190 | 2.1251          | 0.2124 | 0.0801 | 0.1705 | 0.1704    | 18.99   |
| 1.9807        | 11.0  | 1309 | 2.1286          | 0.2069 | 0.0772 | 0.1668 | 0.1668    | 18.9927 |
| 1.9807        | 12.0  | 1428 | 2.1592          | 0.2096 | 0.0786 | 0.1688 | 0.1689    | 18.972  |
| 1.6485        | 13.0  | 1547 | 2.1811          | 0.2069 | 0.0789 | 0.1688 | 0.1689    | 18.9973 |
| 1.6485        | 14.0  | 1666 | 2.2124          | 0.2089 | 0.079  | 0.1686 | 0.1688    | 18.968  |
| 1.6485        | 15.0  | 1785 | 2.2187          | 0.2107 | 0.0797 | 0.1693 | 0.1695    | 18.9893 |
| 1.6485        | 16.0  | 1904 | 2.2438          | 0.2097 | 0.0793 | 0.1695 | 0.1695    | 18.9787 |
| 1.4186        | 17.0  | 2023 | 2.2685          | 0.2092 | 0.0799 | 0.1692 | 0.1693    | 18.99   |
| 1.4186        | 18.0  | 2142 | 2.2733          | 0.2085 | 0.0788 | 0.1684 | 0.1686    | 18.9747 |
| 1.4186        | 19.0  | 2261 | 2.2947          | 0.2087 | 0.0803 | 0.1696 | 0.1696    | 18.9813 |
| 1.4186        | 20.0  | 2380 | 2.3143          | 0.2101 | 0.0804 | 0.1705 | 0.1707    | 18.986  |

Framework versions

  • Transformers 4.38.2
  • Pytorch 2.1.2+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2