How to use Hartunka/bert_base_rand_5_v1_stsb with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Hartunka/bert_base_rand_5_v1_stsb")

# STS-B is a sentence-pair task; pass a pair as a dict with "text" and
# "text_pair" keys (the sentences here are illustrative):
print(pipe({"text": "A man is playing a guitar.", "text_pair": "Someone plays a guitar."}))
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Hartunka/bert_base_rand_5_v1_stsb")
model = AutoModelForSequenceClassification.from_pretrained("Hartunka/bert_base_rand_5_v1_stsb")
```

This model is a fine-tuned version of Hartunka/bert_base_rand_5_v1 on the GLUE STSB dataset. Evaluation-set results are reported in the training results table below.
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|---|---|---|---|---|---|---|
| 2.7364 | 1.0 | 23 | 2.7827 | 0.0963 | 0.0802 | 0.0883 |
| 1.8997 | 2.0 | 46 | 2.2682 | 0.2151 | 0.1983 | 0.2067 |
| 1.5738 | 3.0 | 69 | 2.3547 | 0.2811 | 0.2677 | 0.2744 |
| 1.242 | 4.0 | 92 | 2.3781 | 0.3120 | 0.3112 | 0.3116 |
| 0.8939 | 5.0 | 115 | 2.4723 | 0.3232 | 0.3164 | 0.3198 |
| 0.725 | 6.0 | 138 | 2.5860 | 0.3181 | 0.3054 | 0.3117 |
| 0.5201 | 7.0 | 161 | 2.3986 | 0.3379 | 0.3299 | 0.3339 |
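The Combined Score column is the arithmetic mean of the Pearson and Spearman correlations, the usual convention for STS-B; it checks out against every row of the table. A quick verification against the final (epoch 7) row:

```python
# STS-B's Combined Score = mean of Pearson and Spearman correlations.
# Values taken from the epoch-7 row of the training results table.
pearson = 0.3379
spearmanr = 0.3299

combined = (pearson + spearmanr) / 2
print(round(combined, 4))  # 0.3339, matching the reported Combined Score
```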
Base model: Hartunka/bert_base_rand_5_v1