How to use Hartunka/bert_base_km_5_v2_stsb with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Hartunka/bert_base_km_5_v2_stsb")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Hartunka/bert_base_km_5_v2_stsb")
model = AutoModelForSequenceClassification.from_pretrained("Hartunka/bert_base_km_5_v2_stsb")
```

This model is a fine-tuned version of Hartunka/bert_base_km_5_v2 on the GLUE STSB dataset. Its per-epoch results on the evaluation set are listed in the training results table below.
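STS-B is a regression task, so the fine-tuned model predicts a single similarity score for a sentence pair. Below is a minimal sketch of scoring one pair with the model and tokenizer loaded above, assuming the standard single-logit regression head that GLUE STS-B fine-tuning produces; the example sentences are illustrative, not from the original card:

```python
# A minimal sketch of scoring a sentence pair, assuming a single-logit
# regression head (num_labels=1) as produced by standard STS-B fine-tuning.
import torch

# Example sentences are illustrative only.
inputs = tokenizer(
    "A man is playing a guitar.",
    "A person plays a guitar.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 1) for a regression head

similarity = logits.squeeze().item()  # roughly on the 0-5 STS-B scale
print(f"Predicted similarity: {similarity:.3f}")
```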
Model description: More information needed.
Intended uses & limitations: More information needed.
Training and evaluation data: More information needed.
The training hyperparameter list is missing from this card. Per-epoch training results on the evaluation set follow; the combined score is the mean of the Pearson and Spearman correlations.
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|---|---|---|---|---|---|---|
| 2.6459 | 1.0 | 23 | 2.4966 | 0.1835 | 0.1715 | 0.1775 |
| 1.8235 | 2.0 | 46 | 2.2914 | 0.3594 | 0.3550 | 0.3572 |
| 1.4306 | 3.0 | 69 | 2.0514 | 0.4242 | 0.4267 | 0.4254 |
| 1.0154 | 4.0 | 92 | 1.8177 | 0.4729 | 0.4647 | 0.4688 |
| 0.6595 | 5.0 | 115 | 2.1356 | 0.4311 | 0.4288 | 0.4299 |
| 0.5214 | 6.0 | 138 | 2.0065 | 0.4628 | 0.4615 | 0.4621 |
| 0.3787 | 7.0 | 161 | 2.2221 | 0.4495 | 0.4408 | 0.4451 |
| 0.3169 | 8.0 | 184 | 2.1129 | 0.4652 | 0.4569 | 0.4611 |
| 0.2679 | 9.0 | 207 | 2.1071 | 0.4559 | 0.4422 | 0.4491 |
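The Pearson and Spearman correlations in the table are the standard STS-B metrics. A minimal sketch of computing them with scipy, using hypothetical predictions and labels (not values from the original evaluation):

```python
# A sketch of how the table's STS-B metrics are computed; the predictions
# and references below are hypothetical, not from the original evaluation.
from scipy.stats import pearsonr, spearmanr

predictions = [2.1, 4.7, 0.3, 3.9]  # hypothetical model outputs
references = [2.0, 5.0, 0.0, 4.0]   # hypothetical gold similarity labels

pearson = pearsonr(predictions, references)[0]
spearman = spearmanr(predictions, references)[0]
combined = (pearson + spearman) / 2  # GLUE reports the mean as the combined score
print(f"pearson={pearson:.4f} spearmanr={spearman:.4f} combined={combined:.4f}")
```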
Base model: Hartunka/bert_base_km_5_v2