How to use Hartunka/tiny_bert_rand_20_v2_stsb with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Hartunka/tiny_bert_rand_20_v2_stsb")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Hartunka/tiny_bert_rand_20_v2_stsb")
model = AutoModelForSequenceClassification.from_pretrained("Hartunka/tiny_bert_rand_20_v2_stsb")
```

This model is a fine-tuned version of Hartunka/tiny_bert_rand_20_v2 on the GLUE STSB dataset. It achieves the following results on the evaluation set:
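STS-B is a sentence-pair regression task (similarity scored 0–5), so both sentences must be tokenized together as one input. A minimal sketch of scoring a pair with the directly loaded model (the example sentences are illustrative, and the exact score scale depends on the fine-tuning setup):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "Hartunka/tiny_bert_rand_20_v2_stsb"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Tokenize the sentence pair jointly; STS-B models take both sentences in a single input.
inputs = tokenizer(
    "A man is playing a guitar.",
    "Someone plays an instrument.",
    return_tensors="pt",
    truncation=True,
)

# The regression head produces a single logit, used as the similarity score.
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```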
Model description: More information needed.

Intended uses and limitations: More information needed.

Training and evaluation data: More information needed.
The following hyperparameters were used during training:

More information needed

Training results:
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|---|---|---|---|---|---|---|
| 3.3593 | 1.0 | 23 | 2.3324 | 0.0965 | 0.0831 | 0.0898 |
| 2.0103 | 2.0 | 46 | 2.6266 | 0.1156 | 0.0993 | 0.1075 |
| 1.8753 | 3.0 | 69 | 2.4261 | 0.1551 | 0.1384 | 0.1468 |
| 1.6951 | 4.0 | 92 | 2.4686 | 0.2094 | 0.2023 | 0.2059 |
| 1.431 | 5.0 | 115 | 2.4817 | 0.2171 | 0.2103 | 0.2137 |
| 1.3014 | 6.0 | 138 | 2.6271 | 0.2217 | 0.2267 | 0.2242 |
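The Combined Score column appears to be the arithmetic mean of the Pearson and Spearman correlations; a quick check against the final epoch's row (this averaging rule is an inference from the table values, not stated in the card):

```python
# Pearson and Spearman correlations from epoch 6 of the table above.
pearson, spearman = 0.2217, 0.2267

# Combined score as the arithmetic mean of the two correlations.
combined = (pearson + spearman) / 2
print(round(combined, 4))  # 0.2242, matching the table
```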