Dataset: nyu-mll/glue (MRPC subset)
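For reference, a minimal sketch of loading the MRPC subset of this dataset with the datasets library; "mrpc" is the standard GLUE config name:

```python
from datasets import load_dataset

# Load the MRPC (paraphrase detection) subset of GLUE,
# the data this model was fine-tuned on.
mrpc = load_dataset("nyu-mll/glue", "mrpc")
print(mrpc["train"][0])  # fields: sentence1, sentence2, label, idx
```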
## How to use Hartunka/tiny_bert_km_20_v1_mrpc with Transformers
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Hartunka/tiny_bert_km_20_v1_mrpc")
```
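Because MRPC is a sentence-pair (paraphrase) task, the pipeline expects both sentences. A minimal usage sketch; the example sentences are illustrative, and the label names depend on the model's config:

```python
# Pass both sentences of the pair as text / text_pair.
result = pipe({
    "text": "The company posted record profits this quarter.",
    "text_pair": "Profits at the firm hit an all-time high this quarter.",
})
print(result)  # e.g. [{'label': 'LABEL_1', 'score': ...}]; labels vary by config
```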
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Hartunka/tiny_bert_km_20_v1_mrpc")
model = AutoModelForSequenceClassification.from_pretrained("Hartunka/tiny_bert_km_20_v1_mrpc")
```
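And a minimal manual-inference sketch with the directly loaded tokenizer and model; the sentences are illustrative, and the index-to-label mapping should be checked against `model.config.id2label` rather than assumed:

```python
import torch

# Encode the sentence pair the way MRPC expects.
inputs = tokenizer(
    "The company posted record profits this quarter.",
    "Profits at the firm hit an all-time high this quarter.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits

# Turn logits into class probabilities; verify which index means
# "equivalent" via model.config.id2label before relying on it.
probs = torch.softmax(logits, dim=-1)
print(probs, model.config.id2label)
```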
This model is a fine-tuned version of Hartunka/tiny_bert_km_20_v1 on the GLUE MRPC dataset. Its per-epoch results on the evaluation set are shown in the training table below.
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|---|---|---|---|---|---|---|
| 0.6272 | 1.0 | 15 | 0.5963 | 0.6961 | 0.8075 | 0.7518 |
| 0.5912 | 2.0 | 30 | 0.5992 | 0.6961 | 0.8160 | 0.7561 |
| 0.5662 | 3.0 | 45 | 0.5904 | 0.7157 | 0.8253 | 0.7705 |
| 0.5452 | 4.0 | 60 | 0.5803 | 0.7157 | 0.8204 | 0.7681 |
| 0.5011 | 5.0 | 75 | 0.5830 | 0.7279 | 0.8207 | 0.7743 |
| 0.4413 | 6.0 | 90 | 0.6183 | 0.7059 | 0.7931 | 0.7495 |
| 0.3632 | 7.0 | 105 | 0.6797 | 0.6961 | 0.7926 | 0.7444 |
| 0.2641 | 8.0 | 120 | 0.7895 | 0.7108 | 0.8013 | 0.7561 |
| 0.1796 | 9.0 | 135 | 0.8991 | 0.6912 | 0.7805 | 0.7358 |
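For context, a minimal fine-tuning sketch with the Trainer API. The card's hyperparameter list is not shown above, so every value here is an assumption, not the original setting; the batch size of 256 is only inferred from the table's 15 optimizer steps per epoch over MRPC's roughly 3.7k training examples.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("nyu-mll/glue", "mrpc")
tokenizer = AutoTokenizer.from_pretrained("Hartunka/tiny_bert_km_20_v1")
model = AutoModelForSequenceClassification.from_pretrained(
    "Hartunka/tiny_bert_km_20_v1", num_labels=2
)

def tokenize(batch):
    # MRPC provides two sentences per example; encode them as a pair.
    return tokenizer(batch["sentence1"], batch["sentence2"], truncation=True)

encoded = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="tiny_bert_km_20_v1_mrpc",
    num_train_epochs=9,               # matches the 9 epochs in the table above
    per_device_train_batch_size=256,  # assumption inferred from 15 steps/epoch
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    data_collator=DataCollatorWithPadding(tokenizer),  # dynamic padding per batch
)
trainer.train()
print(trainer.evaluate())
```

Since the actual learning rate, seed, and scheduler are unknown, this sketch will not reproduce the table's numbers exactly.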
Base model: Hartunka/tiny_bert_km_20_v1