T5-model-1-feedback-3110

This model is a fine-tuned version of theojolliffe/T5-model-1-feedback-1109 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1605
  • Rouge1: 91.3604
  • Rouge2: 86.1024
  • RougeL: 90.6798
  • RougeLsum: 90.7011
  • Gen Len: 15.7167
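For readers unfamiliar with the metrics above: Rouge1 measures unigram overlap (as an F1 score, reported here on a 0–100 scale) between the generated text and the reference. The scores in this card come from a full ROUGE implementation (with stemming and longest-common-subsequence variants for RougeL); the following is only a minimal, library-free sketch of the ROUGE-1 F1 idea:

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """Minimal ROUGE-1 F1 sketch: unigram-overlap F1 between two strings.

    Illustrative only; the reported scores use a full ROUGE implementation.
    """
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clipped overlap: each unigram counts at most as often as it
    # appears in the reference.
    overlap = sum((ref_counts & cand_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)
```

An identical candidate and reference score 1.0; a candidate covering half of the reference's unigrams with perfect precision scores 2/3.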

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 4
  • mixed_precision_training: Native AMP
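The original training script is not included in this card, but the hyperparameters above map onto the Hugging Face `Seq2SeqTrainingArguments` API roughly as follows (a hypothetical reconstruction; the `output_dir` value is assumed):

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the configuration listed above;
# not the author's actual training script.
args = Seq2SeqTrainingArguments(
    output_dir="T5-model-1-feedback-3110",  # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,                   # Native AMP mixed-precision training
    predict_with_generate=True,  # generate sequences so ROUGE can be computed
)
```

These arguments would then be passed to a `Seq2SeqTrainer` together with the model, tokenizer, and datasets.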

Training results

Training Loss   Epoch   Step   Validation Loss   Rouge1    Rouge2    RougeL    RougeLsum   Gen Len
0.2711          1.0     2279   0.2176            90.3305   83.9311   89.4476   89.4573     15.7
0.1709          2.0     4558   0.1759            91.3226   85.9979   90.7558   90.7395     15.5667
0.1644          3.0     6837   0.1641            91.8385   86.7529   91.1621   91.1492     15.6792
0.1606          4.0     9116   0.1605            91.3604   86.1024   90.6798   90.7011     15.7167

Framework versions

  • Transformers 4.23.1
  • Pytorch 1.12.1+cu113
  • Datasets 2.6.1
  • Tokenizers 0.13.1