How to use Hartunka/bert_base_rand_50_v2_cola with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Hartunka/bert_base_rand_50_v2_cola")
```

```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Hartunka/bert_base_rand_50_v2_cola")
model = AutoModelForSequenceClassification.from_pretrained("Hartunka/bert_base_rand_50_v2_cola")
```

This model is a fine-tuned version of Hartunka/bert_base_rand_50_v2 on the GLUE COLA dataset. It achieves the results shown in the table below on the evaluation set.
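When loading the model directly, the forward pass returns raw logits rather than labels. A minimal sketch of turning logits into a CoLA acceptability judgment follows; the logit values are made up for illustration, and the index-to-label convention (0 = unacceptable, 1 = acceptable) is the usual GLUE one, not confirmed from this model's config, so check `model.config.id2label` before relying on it.

```python
import math

# Hypothetical logits for one sentence, as produced by
# model(**tokenizer(sentence, return_tensors="pt")).logits;
# the values below are invented for illustration.
logits = [-0.3, 1.2]

# Softmax over the two classes to get probabilities
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Assumed GLUE CoLA convention: index 0 = unacceptable, 1 = acceptable
pred = max(range(len(logits)), key=lambda i: logits[i])
label = ["unacceptable", "acceptable"][pred]
print(label, round(probs[pred], 3))
```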
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
The following results were obtained during training:
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|---|---|---|---|---|---|
| 0.6114 | 1.0 | 34 | 0.6165 | 0.0 | 0.6913 |
| 0.5885 | 2.0 | 68 | 0.6226 | 0.1136 | 0.6884 |
| 0.5384 | 3.0 | 102 | 0.6438 | 0.0961 | 0.6702 |
| 0.4893 | 4.0 | 136 | 0.7323 | 0.0973 | 0.6644 |
| 0.428 | 5.0 | 170 | 0.7124 | 0.1003 | 0.6568 |
| 0.3752 | 6.0 | 204 | 0.9128 | 0.0781 | 0.6146 |
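The Matthews correlation reported above is a balanced binary metric computed from confusion-matrix counts: MCC = (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)). A minimal pure-Python sketch of the formula (equivalent in spirit to `sklearn.metrics.matthews_corrcoef`):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Binary Matthews correlation coefficient from confusion-matrix counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# A classifier that predicts one class everywhere scores MCC = 0 regardless
# of its accuracy, which would explain an epoch-1 row like the one above
# (MCC 0.0 alongside ~69% accuracy) if the model predicted the majority class.
print(matthews_corrcoef([1, 1, 0, 0], [1, 1, 1, 1]))  # 0.0
```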
Base model
Hartunka/bert_base_rand_50_v2