# samsum_42
This model is a fine-tuned version of google/t5-v1_1-large on the samsum dataset. It achieves the following results on the evaluation set:
- Loss: 1.4219
- Rouge1: 47.7352
- Rouge2: 22.4393
- Rougel: 36.3483
- Rougelsum: 42.0385
- Gen Len: 25.8987
- Test Rougel: 36.3238
- Df Rougel: 34.8151
- Unlearn Overall Rougel: 1.2543
- Unlearn Time: 9997.5910
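A minimal usage sketch for dialogue summarization with this checkpoint. The repository id, the `summarize:` task prefix, and the generation settings below are assumptions for illustration, not part of the reported configuration.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumed Hub repository id for this checkpoint.
model_id = "jialicheng/unlearn_samsum_t5-large_random_label_6_42"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)

# T5 checkpoints are usually prompted with a task prefix; "summarize: " is an assumption here.
inputs = tokenizer("summarize: " + dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```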
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
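A hedged sketch of how these hyperparameters could be expressed as `Seq2SeqTrainingArguments`. The output directory, evaluation strategy, and `predict_with_generate` flag are assumptions added so that per-epoch ROUGE scores like those in the table below can be computed.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="samsum_42",           # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,                   # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",      # assumed: matches the per-epoch validation rows below
    predict_with_generate=True,       # assumed: required to compute ROUGE during evaluation
)
```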
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Overall Rougel | Unlearn Overall Rougel | Time |
|---|---|---|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 460 | 1.4360 | 45.9058 | 21.3022 | 33.9975 | 40.6107 | 24.8912 | 0.8854 | 0.8854 | -1 |
| No log | 2.0 | 920 | 1.4226 | 36.5926 | 16.101 | 28.0551 | 32.8062 | 22.1212 | 0.8466 | 0.8466 | -1 |
| 1.6998 | 3.0 | 1380 | 1.4203 | 47.7797 | 22.4865 | 34.9477 | 41.9757 | 25.7325 | 1.2035 | 1.2035 | -1 |
| 1.6998 | 4.0 | 1840 | 1.4220 | 48.1146 | 22.5913 | 35.3269 | 42.394 | 26.2737 | 1.1273 | 1.1273 | -1 |
| 1.5288 | 5.0 | 2300 | 1.4219 | 47.7352 | 22.4393 | 34.8151 | 42.0385 | 25.8987 | 1.2543 | 1.2543 | -1 |
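A sketch of how ROUGE scores like the ones above could be recomputed on the samsum test split with the `evaluate` and `datasets` libraries. The repository id, task prefix, and generation settings are assumptions, and loading `samsum` may additionally require `py7zr`.

```python
import torch
import evaluate
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "jialicheng/unlearn_samsum_t5-large_random_label_6_42"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id).eval()

rouge = evaluate.load("rouge")
test_set = load_dataset("samsum", split="test")  # may need py7zr installed

predictions = []
for dialogue in test_set["dialogue"]:
    inputs = tokenizer("summarize: " + dialogue, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=64, num_beams=4)
    predictions.append(tokenizer.decode(output[0], skip_special_tokens=True))

scores = rouge.compute(predictions=predictions, references=test_set["summary"])
print(scores)  # rouge1 / rouge2 / rougeL / rougeLsum
```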
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2