This model is a fine-tuned version of [t5-large](https://huggingface.co/google-t5/t5-large) on the GLUE dataset. Evaluation results at each checkpoint are reported in the training results table below.

How to use thrunlab/t5-large_cola_dense_epochs-7_decoder_all_sparsity10 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="thrunlab/t5-large_cola_dense_epochs-7_decoder_all_sparsity10")
```

```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("thrunlab/t5-large_cola_dense_epochs-7_decoder_all_sparsity10")
model = AutoModelForSequenceClassification.from_pretrained("thrunlab/t5-large_cola_dense_epochs-7_decoder_all_sparsity10")
```
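A minimal inference sketch, building on the `pipe`, `tokenizer`, and `model` objects loaded above. The example sentence and the label lookup are illustrative assumptions, not taken from the model card; the checkpoint's `id2label` mapping may be the generic `LABEL_0`/`LABEL_1` rather than named CoLA acceptability labels.

```python
import torch

# High-level call: the pipeline returns a list of {label, score} dicts.
result = pipe("The cat sat on the mat.")  # example sentence is an assumption
print(result)  # e.g. [{'label': ..., 'score': ...}]

# Equivalent low-level call with the tokenizer and model loaded above.
inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
```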
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training results

The following results were recorded during training:
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 0.5441 | 0.37 | 25 | 0.5813 | 0.6913 |
| 0.3969 | 0.75 | 50 | 0.5219 | 0.8044 |
| 0.3537 | 1.12 | 75 | 0.4713 | 0.8313 |
| 0.2905 | 1.49 | 100 | 0.6308 | 0.8150 |
| 0.3157 | 1.87 | 125 | 0.4301 | 0.8341 |
| 0.2208 | 2.24 | 150 | 2.3147 | 0.8332 |
| 0.2231 | 2.61 | 175 | 0.4612 | 0.8341 |
| 0.2404 | 2.99 | 200 | 1.5471 | 0.8265 |
| 0.1697 | 3.36 | 225 | 0.8701 | 0.8313 |
| 0.131 | 3.73 | 250 | 1.2642 | 0.8380 |
| 0.1219 | 4.1 | 275 | 0.9926 | 0.8370 |
| 0.2647 | 4.48 | 300 | 5.1919 | 0.8341 |
| 0.1329 | 4.85 | 325 | 2.2726 | 0.8418 |
| 0.0857 | 5.22 | 350 | 4.2193 | 0.8370 |
| 0.0989 | 5.6 | 375 | 5.3604 | 0.8389 |
| 0.2557 | 5.97 | 400 | 3.0246 | 0.8341 |
| 0.2617 | 6.34 | 425 | 5.6630 | 0.8456 |
| 0.2526 | 6.72 | 450 | 6.0474 | 0.8360 |
Base model: [google-t5/t5-large](https://huggingface.co/google-t5/t5-large)