samsum_42

This model is a fine-tuned version of google/t5-v1_1-small on the samsum dataset. It achieves the following results on the evaluation set:

  • Loss: 1.9358
  • Rouge1: 40.0008
  • Rouge2: 17.5494
  • Rougel: 33.878
  • Rougelsum: 37.1775
  • Gen Len: 20.7677
  • Test Rougel: 33.8419
  • Df Rougel: 33.5024
  • Unlearn Overall Rougel: 0.6698
  • Unlearn Time: 944.6205
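The ROUGE scores above come from the standard `rouge_score`/`evaluate` implementation used by the Transformers summarization examples. As an illustrative sketch only (not the stemmed, tokenizer-aware implementation behind the reported numbers), ROUGE-L measures the longest common subsequence (LCS) between a generated summary and the reference:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]


def rouge_l_f1(candidate: str, reference: str) -> float:
    """Toy whitespace-tokenized ROUGE-L F1 (no stemming, no sentence splitting)."""
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

A perfect match scores 1.0; partial word overlap in order scores proportionally lower.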

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
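With `lr_scheduler_type: linear`, the learning rate decays linearly from its initial value to zero over training. A minimal sketch of that schedule, assuming zero warmup steps (not stated in this card) and taking the total step count from the results table (461 steps per epoch × 5 epochs = 2305):

```python
# Assumptions: no warmup; total steps derived from the training-results table.
LEARNING_RATE = 5e-5
TOTAL_STEPS = 461 * 5  # 2305 optimizer steps over 5 epochs

def linear_lr(step: int, base_lr: float = LEARNING_RATE,
              total_steps: int = TOTAL_STEPS) -> float:
    """Learning rate at a given optimizer step under linear decay to zero."""
    return base_lr * max(0.0, 1.0 - step / total_steps)
```

So the rate starts at 5e-05, is roughly half that at the end of epoch 2–3, and reaches zero at step 2305.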

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len | Overall Rougel | Unlearn Overall Rougel | Time |
|:-------------|:-----|:-----|:----------------|:--------|:--------|:--------|:----------|:--------|:---------------|:-----------------------|:-----|
| No log        | 1.0   | 461  | 2.0235          | 38.9936 | 17.1818 | 33.1425 | 36.2167   | 19.2225 | 0.3841         | 0.3841                 | -1   |
| No log        | 2.0   | 922  | 1.9779          | 38.8206 | 16.945  | 33.2932 | 36.1303   | 20.0905 | 0.3454         | 0.3454                 | -1   |
| 2.8574        | 3.0   | 1383 | 1.9503          | 39.7639 | 17.5862 | 33.6691 | 36.9658   | 21.5819 | 0.5118         | 0.5118                 | -1   |
| 2.8574        | 4.0   | 1844 | 1.9435          | 39.6601 | 17.4391 | 33.69   | 36.8957   | 20.5232 | 0.4537         | 0.4537                 | -1   |
| 2.6803        | 5.0   | 2305 | 1.9358          | 40.0008 | 17.5494 | 33.5024 | 37.1775   | 20.7677 | 0.6698         | 0.6698                 | -1   |
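As a consistency check on the table (an observation, not part of the original card): the 461 optimizer steps per epoch match the SAMSum train split size of 14,732 dialogues, as listed on the dataset card, divided by the training batch size of 32 and rounded up:

```python
import math

# 14,732 is the SAMSum train split size per the dataset card (assumption
# this run used the full split); train_batch_size is 32 per the card above.
steps_per_epoch = math.ceil(14732 / 32)
```

This yields 461 steps per epoch, i.e. 2305 steps over 5 epochs, matching the Step column.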

Framework versions

  • Transformers 4.39.3
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.0
  • Tokenizers 0.15.2
Model tree for jialicheng/unlearn_samsum_t5-small_random_label_6_42