---
language:
- en
license: apache-2.0
datasets:
- glue
metrics:
- accuracy
model-index:
- name: t5-base-finetuned-sst2
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE SST-2
      type: glue
      args: SST-2
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9323
---
# T5-base-finetuned-sst2
This model is T5 fine-tuned on the GLUE SST-2 dataset. It achieves the following results on the validation set:
- Accuracy: 0.9323
## Model Details
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.
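For inference, the model can be prompted with the same `sst2 sentence:` prefix used during fine-tuning and the generated text decoded as the predicted label. The snippet below is a minimal sketch; the checkpoint name is assumed to match this repository and may need to be adjusted.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

checkpoint = "t5-base-finetuned-sst2"  # assumed repo id; adjust to the actual hub path
tokenizer = T5Tokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint)

# The model expects the same task prefix that was used during fine-tuning.
inputs = tokenizer("sst2 sentence: This movie was absolutely wonderful!", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=3)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # expected: "positive" or "negative"
```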
## Training procedure

### Tokenization
Since T5 is a text-to-text model, the dataset labels are converted to text as follows. For each example, the input is formed as `"sst2 sentence: "` + the SST-2 sentence and fed to the tokenizer to obtain the `input_ids` and `attention_mask`. Each label is mapped to the string `"positive"` if the label is 1 and `"negative"` otherwise, and then tokenized to obtain its own `input_ids` and `attention_mask`. During training, pad token ids in the label `input_ids` are replaced with -100 so that no loss is computed for them. These ids are then passed as `labels`, and the label `attention_mask` is passed as the `decoder_attention_mask`.
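The sketch below illustrates this preprocessing with the Transformers tokenizer; the helper name and the label padding length are illustrative rather than taken from the original training script.

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")

def preprocess(example):
    # Prepend the task prefix to the raw SST-2 sentence.
    model_inputs = tokenizer("sst2 sentence: " + example["sentence"], truncation=True)

    # Map the integer label to its text form and tokenize it
    # (padding length of 3 is illustrative).
    label_text = "positive" if example["label"] == 1 else "negative"
    label_enc = tokenizer(label_text, padding="max_length", max_length=3, truncation=True)

    # Replace pad token ids in the labels with -100 so they are ignored by the loss.
    model_inputs["labels"] = [
        tok if tok != tokenizer.pad_token_id else -100
        for tok in label_enc["input_ids"]
    ]
    model_inputs["decoder_attention_mask"] = label_enc["attention_mask"]
    return model_inputs
```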
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: epsilon=1e-08
- num_epochs: 2
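As a rough guide, these values correspond to a `Seq2SeqTrainingArguments` configuration like the sketch below; the optimizer type, learning-rate scheduler, and evaluation settings are not reported above and are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-sst2",  # illustrative output directory
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_epsilon=1e-08,
    num_train_epochs=2,
    # Assumptions: evaluate once per epoch (matching the results table) and
    # compute accuracy from generated label text. Note that newer transformers
    # versions rename evaluation_strategy to eval_strategy.
    evaluation_strategy="epoch",
    predict_with_generate=True,
)
```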
### Training results
| Epoch | Training Loss | Validation Accuracy |
|---|---|---|
| 1 | 0.1045 | 0.9323 |
| 2 | 0.0539 | 0.9243 |