Dataset: nyu-mll/glue (CoLA subset)
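For reference, the CoLA subset of this dataset can be loaded with the `datasets` library. This is a minimal sketch using the standard GLUE config name, not something taken from the original card:

```python
# Sketch: load the CoLA subset of the nyu-mll/glue dataset.
from datasets import load_dataset

cola = load_dataset("nyu-mll/glue", "cola")
print(cola)              # train / validation / test splits
print(cola["train"][0])  # {'sentence': ..., 'label': 0 or 1, 'idx': ...}
```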
How to use Hartunka/bert_base_rand_10_v2_cola with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Hartunka/bert_base_rand_10_v2_cola")
```

```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Hartunka/bert_base_rand_10_v2_cola")
model = AutoModelForSequenceClassification.from_pretrained("Hartunka/bert_base_rand_10_v2_cola")
```

This model is a fine-tuned version of Hartunka/bert_base_rand_10_v2 on the GLUE CoLA dataset. Its per-epoch results on the evaluation set are reported in the training results table below.
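As a quick usage sketch, the pipeline can score a sentence for grammatical acceptability, which is what the CoLA task measures. The label names depend on the checkpoint's config; `LABEL_0`/`LABEL_1` below are assumed defaults, not confirmed by this card:

```python
# Hedged usage sketch: score sentences for grammatical acceptability.
# LABEL_0 (unacceptable) / LABEL_1 (acceptable) are assumed default names.
from transformers import pipeline

pipe = pipeline("text-classification", model="Hartunka/bert_base_rand_10_v2_cola")
print(pipe("The book was written by John."))  # e.g. [{'label': 'LABEL_1', 'score': ...}]
print(pipe("Book the John by written was."))  # scrambled word order, likely unacceptable
```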
Model description: More information needed

Intended uses & limitations: More information needed

Training and evaluation data: More information needed
Training hyperparameters: More information needed

Training results (evaluated after each epoch):
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|---|---|---|---|---|---|
| 0.6124 | 1.0 | 34 | 0.6198 | 0.0 | 0.6913 |
| 0.5889 | 2.0 | 68 | 0.6246 | 0.0413 | 0.6903 |
| 0.5412 | 3.0 | 102 | 0.6388 | 0.0297 | 0.6846 |
| 0.4976 | 4.0 | 136 | 0.7266 | 0.1223 | 0.6558 |
| 0.4461 | 5.0 | 170 | 0.6883 | 0.1223 | 0.6731 |
| 0.4039 | 6.0 | 204 | 0.8319 | 0.0785 | 0.6433 |
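For readers who want to check these numbers, here is a minimal sketch (not part of the original card) of how the Matthews correlation could be recomputed on the CoLA validation split. It assumes the checkpoint uses the default `LABEL_0`/`LABEL_1` names, with `LABEL_1` meaning "acceptable":

```python
# Sketch: recompute Matthews correlation on the CoLA validation split.
# Assumes default LABEL_0/LABEL_1 names in the checkpoint config.
from datasets import load_dataset
from sklearn.metrics import matthews_corrcoef
from transformers import pipeline

pipe = pipeline("text-classification", model="Hartunka/bert_base_rand_10_v2_cola")
val = load_dataset("nyu-mll/glue", "cola", split="validation")

# Map "LABEL_0"/"LABEL_1" strings back to integer class ids 0/1.
preds = [int(out["label"].split("_")[-1]) for out in pipe(val["sentence"], batch_size=32)]
print("MCC:", matthews_corrcoef(val["label"], preds))
```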
Base model: Hartunka/bert_base_rand_10_v2