How to use namvandy/bert-base-finetuned-sts with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="namvandy/bert-base-finetuned-sts")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("namvandy/bert-base-finetuned-sts")
model = AutoModelForSequenceClassification.from_pretrained("namvandy/bert-base-finetuned-sts")
```

This model is a fine-tuned version of klue/bert-base on the klue dataset. Evaluation results are reported in the training results table below.
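For KLUE STS the model scores a pair of Korean sentences, and the text-classification pipeline accepts a pair as a dict with `text` and `text_pair` keys. A minimal sketch of pair-input scoring (the Korean example sentences are placeholders, and running the demo downloads the model):

```python
def pair_input(text, text_pair):
    """Format a sentence pair for a text-classification pipeline."""
    return {"text": text, "text_pair": text_pair}


if __name__ == "__main__":
    from transformers import pipeline  # requires the transformers package

    pipe = pipeline("text-classification", model="namvandy/bert-base-finetuned-sts")
    # Placeholder pair of Korean sentences with similar meaning
    print(pipe(pair_input("오늘 날씨가 좋다", "오늘은 날씨가 맑다")))
```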
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training results

The following results were recorded during training (loss and Pearson correlation on the validation set, per epoch):
| Training Loss | Epoch | Step | Validation Loss | Pearsonr |
|---|---|---|---|---|
| 0.2345 | 1.0 | 2917 | 0.7037 | 0.8757 |
| 0.1491 | 2.0 | 5834 | 0.4869 | 0.8846 |
| 0.097 | 3.0 | 8751 | 0.4023 | 0.9041 |
| 0.0735 | 4.0 | 11668 | 0.3960 | 0.9073 |
| 0.0644 | 5.0 | 14585 | 0.4838 | 0.8989 |
| 0.0446 | 6.0 | 17502 | 0.3990 | 0.9078 |
| 0.0355 | 7.0 | 20419 | 0.3951 | 0.9116 |
| 0.0277 | 8.0 | 23336 | 0.4284 | 0.9053 |
| 0.0239 | 9.0 | 26253 | 0.4166 | 0.9073 |
| 0.0205 | 10.0 | 29170 | 0.4234 | 0.9062 |
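The Pearsonr column above is the Pearson correlation between the model's predicted similarity scores and the gold STS labels; values near 1.0 mean predictions track the labels almost linearly. A minimal sketch of the metric (in practice you would use `scipy.stats.pearsonr` or the `evaluate` library):

```python
import math


def pearsonr(xs, ys):
    # Pearson correlation: covariance normalized by the two standard deviations
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


print(pearsonr([0.0, 1.0, 2.0], [0.0, 2.0, 4.0]))  # perfectly linear -> 1.0
```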