How to use gokuls/mobilebert_add_GLUE_Experiment_mrpc with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="gokuls/mobilebert_add_GLUE_Experiment_mrpc")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("gokuls/mobilebert_add_GLUE_Experiment_mrpc")
model = AutoModelForSequenceClassification.from_pretrained("gokuls/mobilebert_add_GLUE_Experiment_mrpc")
```

This model is a fine-tuned version of google/mobilebert-uncased on the GLUE MRPC dataset. It achieves the following results on the evaluation set:
Model description: More information needed.

Intended uses and limitations: More information needed.

Training and evaluation data: More information needed.
Training results (per epoch):
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|---|---|---|---|---|---|---|
| 0.6387 | 1.0 | 29 | 0.6245 | 0.6838 | 0.8122 | 0.7480 |
| 0.6307 | 2.0 | 58 | 0.6234 | 0.6838 | 0.8122 | 0.7480 |
| 0.6307 | 3.0 | 87 | 0.6233 | 0.6838 | 0.8122 | 0.7480 |
| 0.6295 | 4.0 | 116 | 0.6231 | 0.6838 | 0.8122 | 0.7480 |
| 0.6261 | 5.0 | 145 | 0.6197 | 0.6838 | 0.8122 | 0.7480 |
| 0.6147 | 6.0 | 174 | 0.6344 | 0.6838 | 0.8122 | 0.7480 |
| 0.6209 | 7.0 | 203 | 0.6398 | 0.6838 | 0.8122 | 0.7480 |
| 0.6007 | 8.0 | 232 | 0.6338 | 0.6324 | 0.7517 | 0.6920 |
| 0.5795 | 9.0 | 261 | 0.6377 | 0.6250 | 0.7429 | 0.6839 |
| 0.5712 | 10.0 | 290 | 0.6290 | 0.6814 | 0.8036 | 0.7425 |