---
base_model: weny22/sum_model_t5_saved
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: extract_long_text_balanced_data
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# extract_long_text_balanced_data

This model is a fine-tuned version of [weny22/sum_model_t5_saved](https://huggingface.co/weny22/sum_model_t5_saved) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3331
- Rouge1: 0.2094
- Rouge2: 0.0794
- Rougel: 0.1697
- Rougelsum: 0.1696
- Gen Len: 18.9853
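
The card does not yet include a usage snippet; the sketch below shows one plausible way to run inference with the `transformers` summarization pipeline. The Hub id `weny22/extract_long_text_balanced_data` is an assumption inferred from the base model's namespace, and the `max_length` value is chosen to match the ~19-token generation length reported above.

```python
# Assumed Hub id; adjust if the checkpoint is published under a different name.
MODEL_ID = "weny22/extract_long_text_balanced_data"


def summarize(text: str, model_id: str = MODEL_ID) -> str:
    """Summarize `text` with the fine-tuned T5 checkpoint."""
    # Import deferred so this module loads even without transformers installed.
    from transformers import pipeline

    summarizer = pipeline("summarization", model=model_id)
    # Eval "Gen Len" is ~19 tokens, so a short max_length is sufficient.
    result = summarizer(text, max_length=20, truncation=True)
    return result[0]["summary_text"]


if __name__ == "__main__":
    print(summarize("Paste a long input document here to get a short summary."))
```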

## Model description

A T5-based abstractive summarization model, fine-tuned from [weny22/sum_model_t5_saved](https://huggingface.co/weny22/sum_model_t5_saved). Generated summaries average roughly 19 tokens on the evaluation set. Further details about the architecture and training corpus have not been documented.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
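
The list above maps onto a `Seq2SeqTrainingArguments` configuration roughly as sketched below. This is a hedged reconstruction, not the author's actual script: `output_dir`, `evaluation_strategy`, and `predict_with_generate` are assumptions (per-epoch evaluation and generation are implied by the per-epoch ROUGE scores in the results table, but are not stated in the card).

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
args = Seq2SeqTrainingArguments(
    output_dir="extract_long_text_balanced_data",  # assumption
    learning_rate=2e-3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",   # assumption: metrics are reported once per epoch
    predict_with_generate=True,    # assumption: required to compute ROUGE during eval
)
```

Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer, so it needs no explicit argument here.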

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log        | 1.0   | 119  | 2.3318          | 0.18   | 0.0552 | 0.1439 | 0.1439    | 18.97   |
| No log        | 2.0   | 238  | 2.1980          | 0.1899 | 0.06   | 0.1503 | 0.1503    | 18.972  |
| No log        | 3.0   | 357  | 2.1448          | 0.1952 | 0.0646 | 0.1542 | 0.1541    | 18.9993 |
| No log        | 4.0   | 476  | 2.1372          | 0.1983 | 0.0683 | 0.1574 | 0.1574    | 18.9453 |
| 2.8015        | 5.0   | 595  | 2.1142          | 0.2    | 0.0725 | 0.1611 | 0.1611    | 18.9933 |
| 2.8015        | 6.0   | 714  | 2.0970          | 0.2027 | 0.0757 | 0.1629 | 0.1629    | 18.9987 |
| 2.8015        | 7.0   | 833  | 2.1187          | 0.2027 | 0.0755 | 0.1637 | 0.1634    | 18.968  |
| 2.8015        | 8.0   | 952  | 2.1222          | 0.2013 | 0.0737 | 0.1619 | 0.1618    | 18.9753 |
| 2.02          | 9.0   | 1071 | 2.1316          | 0.2021 | 0.0764 | 0.1648 | 0.1647    | 18.9667 |
| 2.02          | 10.0  | 1190 | 2.1455          | 0.2109 | 0.0784 | 0.169  | 0.1689    | 18.982  |
| 2.02          | 11.0  | 1309 | 2.1580          | 0.2065 | 0.0781 | 0.167  | 0.1669    | 18.968  |
| 2.02          | 12.0  | 1428 | 2.1792          | 0.2088 | 0.0788 | 0.1693 | 0.169     | 18.9767 |
| 1.683         | 13.0  | 1547 | 2.1958          | 0.2085 | 0.0781 | 0.1689 | 0.1689    | 18.9913 |
| 1.683         | 14.0  | 1666 | 2.2436          | 0.2082 | 0.0785 | 0.1693 | 0.1692    | 18.978  |
| 1.683         | 15.0  | 1785 | 2.2480          | 0.2075 | 0.0797 | 0.1678 | 0.1678    | 18.9853 |
| 1.683         | 16.0  | 1904 | 2.2714          | 0.208  | 0.0789 | 0.1686 | 0.1685    | 18.9887 |
| 1.45          | 17.0  | 2023 | 2.2771          | 0.2091 | 0.0787 | 0.1693 | 0.1691    | 18.98   |
| 1.45          | 18.0  | 2142 | 2.2913          | 0.2103 | 0.0792 | 0.17   | 0.1698    | 18.9873 |
| 1.45          | 19.0  | 2261 | 2.3163          | 0.2094 | 0.0792 | 0.1699 | 0.1697    | 18.9893 |
| 1.45          | 20.0  | 2380 | 2.3331          | 0.2094 | 0.0794 | 0.1697 | 0.1696    | 18.9853 |


### Framework versions

- Transformers 4.38.2
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2