T5-base-finetuned-qqp

This model is T5 fine-tuned on the GLUE QQP dataset. It achieves the following results on the validation set:

  • Accuracy: 0.9123

Model Details

T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks, in which each task is converted into a text-to-text format.
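
Because T5 treats every task as text generation, QQP classification is performed by generating the literal label string. The following is a minimal inference sketch, assuming the hub id PavanNeerudu/t5-base-finetuned-qqp and the prompt format described under Tokenization below:

```python
# A minimal inference sketch; the prompt format (including its exact
# spacing) follows the Tokenization section of this card.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "PavanNeerudu/t5-base-finetuned-qqp"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

text = ("qqp question1: How do I learn Python?"
        "question2: What is the best way to learn Python?")
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# prints "duplicate" or "not_duplicate"
```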

Training procedure

Tokenization

Since T5 is a text-to-text model, the examples and labels of the dataset are converted as follows. For each example, a sentence is formed as "qqp question1: " + qqp_question1 + "question2: " + qqp_question2 and fed to the tokenizer to obtain the input_ids and attention_mask. The target text is chosen as "duplicate" if the label is 1 and "not_duplicate" otherwise, and is tokenized in turn to obtain its own input_ids and attention_mask. During training, pad tokens in the target input_ids are replaced with -100 so that no loss is computed for them. These input_ids are then passed as the labels, and the target's attention_mask is passed as the decoder_attention_mask. A sketch of this preprocessing follows.
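
A minimal sketch of this preprocessing, assuming the GLUE QQP split from the datasets library; the max_length values are assumptions, since the card does not state the sequence lengths used:

```python
# Preprocessing sketch for GLUE QQP as described above.
from datasets import load_dataset
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
dataset = load_dataset("glue", "qqp")

def preprocess(example):
    # Build the source text exactly as described in this section.
    text = ("qqp question1: " + example["question1"]
            + "question2: " + example["question2"])
    target = "duplicate" if example["label"] == 1 else "not_duplicate"

    # max_length values below are assumptions, not from the card.
    model_inputs = tokenizer(text, max_length=128,
                             truncation=True, padding="max_length")
    targets = tokenizer(target, max_length=8,
                        truncation=True, padding="max_length")

    # Replace pad token ids in the target with -100 so the loss
    # ignores padded positions.
    model_inputs["labels"] = [
        tid if tid != tokenizer.pad_token_id else -100
        for tid in targets["input_ids"]
    ]
    # The target's attention mask becomes the decoder attention mask.
    model_inputs["decoder_attention_mask"] = targets["attention_mask"]
    return model_inputs

encoded = dataset.map(preprocess)
```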

Training hyperparameters

The following hyperparameters were used during training (a training sketch using them appears after the list):

  • learning_rate: 3e-4
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: epsilon=1e-08
  • num_epochs: 3
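
The card does not name the optimizer, only its epsilon. The sketch below is a minimal reconstruction, assuming the Hugging Face Trainer with its default AdamW optimizer and the encoded dataset from the preprocessing sketch above; it is not the author's original training script.

```python
# A minimal training sketch wiring the listed hyperparameters into the
# Trainer API. AdamW is an assumption (the card only states epsilon).
from transformers import (T5ForConditionalGeneration, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model = T5ForConditionalGeneration.from_pretrained("t5-base")

args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-qqp",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=3,
    adam_epsilon=1e-08,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],       # from the preprocessing sketch
    eval_dataset=encoded["validation"],
)
trainer.train()
```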

Training results

| Epoch | Training Loss | Validation Accuracy |
|:-----:|:-------------:|:-------------------:|
| 1     | 0.0672        | 0.8888              |
| 2     | 0.0428        | 0.9082              |
| 3     | 0.0231        | 0.9123              |