samsum_42

This model is a fine-tuned version of google/t5-v1_1-small on the samsum dataset. It achieves the following results on the evaluation set:

  • Loss: 1.9516
  • Rouge1: 40.2147
  • Rouge2: 17.9635
  • Rougel: 34.1287
  • Rougelsum: 37.3694
  • Gen Len: 21.5648
  • Test Rougel: 34.0904
  • Df Rougel: 34.311
  • Unlearn Overall Rougel: 0.3897
  • Unlearn Time: 208.1742
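A minimal inference sketch, assuming the checkpoint is available under the hub id jialicheng/unlearn_samsum_t5-small_random_label_10_42 and loads with the standard Transformers seq2seq API; the dialogue below is made up for illustration:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed hub repository id for this checkpoint.
model_id = "jialicheng/unlearn_samsum_t5-small_random_label_10_42"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# An illustrative SAMSum-style dialogue (made up for this example).
dialogue = (
    "Hannah: Are we still meeting at 6?\n"
    "Tom: Yes, but I might be 10 minutes late.\n"
    "Hannah: No problem, see you there."
)

inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```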

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
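As a rough illustration, the hyperparameters above map onto a Seq2SeqTrainingArguments configuration along these lines; this is a sketch, not the exact training script, and anything not listed above (output directory, evaluation cadence, generation settings) is an assumption:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="unlearn_samsum_t5-small",  # placeholder, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    # Adam settings matching the optimizer line above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",   # assumption: the results table reports one eval per epoch
    predict_with_generate=True,    # assumption: needed to compute ROUGE / Gen Len during eval
)
```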

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Unlearn Overall Rougel | Overall Rougel | Time |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:----------------------:|:--------------:|:----:|
| No log        | 1.0   | 461  | 2.0240          | 38.4237 | 16.8873 | 32.6827 | 35.6564   | 18.7335 | 0.1403                 | 0.1403         | -1   |
| No log        | 2.0   | 922  | 1.9751          | 39.6569 | 17.4973 | 33.6443 | 36.7824   | 20.8643 | 0.2372                 | 0.2372         | -1   |
| 2.9467        | 3.0   | 1383 | 1.9516          | 40.2147 | 17.9635 | 34.0904 | 37.3694   | 21.5648 | 0.3897                 | 0.3897         | -1   |
| 2.9467        | 4.0   | 1844 | 1.9410          | 40.5306 | 18.0501 | 34.3169 | 37.557    | 20.9315 | 0.3297                 | 0.3297         | -1   |
| 2.7294        | 5.0   | 2305 | 1.9369          | 40.5018 | 17.9761 | 34.5748 | 37.5502   | 21.0147 | 0.3582                 | 0.3582         | -1   |
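The Rouge1/Rouge2/Rougel/Rougelsum columns are the usual summarization metrics reported on a 0-100 scale. As a sketch (assuming the `evaluate` library's `rouge` metric, a common but not confirmed choice for cards like this one), they can be computed along these lines; the predictions and references below are made up:

```python
import evaluate

rouge = evaluate.load("rouge")

# Hypothetical decoded model outputs and reference summaries.
predictions = ["Hannah and Tom will meet at 6; Tom may be 10 minutes late."]
references = ["Tom might be 10 minutes late to his 6 o'clock meeting with Hannah."]

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# scores holds rouge1, rouge2, rougeL and rougeLsum as F-measures in [0, 1];
# the table above reports them scaled to percentages.
print({k: round(v * 100, 4) for k, v in scores.items()})

# "Gen Len" is typically the mean number of generated tokens per summary.
```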

Framework versions

  • Transformers 4.39.3
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.0
  • Tokenizers 0.15.2