How to use Hartunka/tiny_bert_km_20_v1_stsb with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Hartunka/tiny_bert_km_20_v1_stsb")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Hartunka/tiny_bert_km_20_v1_stsb")
model = AutoModelForSequenceClassification.from_pretrained("Hartunka/tiny_bert_km_20_v1_stsb")
```

This model is a fine-tuned version of Hartunka/tiny_bert_km_20_v1 on the GLUE STSB dataset. Per-epoch validation results are reported in the training results table below.
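Since STSB is a sentence-similarity regression task, scoring a pair of sentences with the directly loaded model looks roughly like the sketch below. The helper `similarity_score` is hypothetical (not part of this model card) and assumes the checkpoint outputs a single regression logit (`num_labels == 1`), as STSB fine-tunes typically do.

```python
import torch


def similarity_score(model, tokenizer, sentence1, sentence2):
    """Return the raw regression score for a sentence pair.

    Assumes a sequence-classification model with a single-logit
    regression head, as used for GLUE STSB.
    """
    # Encode the two sentences as one pair input.
    inputs = tokenizer(sentence1, sentence2, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits  # shape (1, 1) for regression
    return logits.squeeze().item()


# Usage (downloads the checkpoint):
# from transformers import AutoTokenizer, AutoModelForSequenceClassification
# tokenizer = AutoTokenizer.from_pretrained("Hartunka/tiny_bert_km_20_v1_stsb")
# model = AutoModelForSequenceClassification.from_pretrained("Hartunka/tiny_bert_km_20_v1_stsb")
# score = similarity_score(model, tokenizer,
#                          "A man is playing guitar.",
#                          "A person plays an instrument.")
```

STSB gold labels range from 0 to 5, so the raw score is usually interpreted on that scale.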
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|---|---|---|---|---|---|---|
| 3.6231 | 1.0 | 23 | 2.1600 | 0.2054 | 0.1826 | 0.1940 |
| 2.0508 | 2.0 | 46 | 2.4199 | 0.1847 | 0.1774 | 0.1811 |
| 1.9051 | 3.0 | 69 | 2.3382 | 0.2217 | 0.2091 | 0.2154 |
| 1.7517 | 4.0 | 92 | 2.2891 | 0.2602 | 0.2525 | 0.2563 |
| 1.5284 | 5.0 | 115 | 2.3172 | 0.2782 | 0.2719 | 0.2750 |
| 1.3355 | 6.0 | 138 | 2.5107 | 0.2682 | 0.2598 | 0.2640 |
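The "Combined Score" column appears to be the arithmetic mean of the Pearson and Spearman correlations, which is how the GLUE benchmark aggregates the two STSB metrics. A minimal sketch, assuming that definition:

```python
def combined_score(pearson, spearmanr):
    """Average the two STSB correlation metrics, as GLUE does."""
    return (pearson + spearmanr) / 2


# Epoch 1 row of the table: Pearson 0.2054, Spearman 0.1826
# reproduces the reported Combined Score of 0.1940.
score = combined_score(0.2054, 0.1826)
```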
Base model: Hartunka/tiny_bert_km_20_v1