Dataset: nyu-mll/glue
How to use Hartunka/bert_base_rand_100_v2_cola with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Hartunka/bert_base_rand_100_v2_cola")
```
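The pipeline returns a predicted label and confidence score for each input. A minimal usage sketch (the example sentence is illustrative, and the label names depend on the checkpoint's config, often generic `LABEL_0`/`LABEL_1`):

```python
# Score a single sentence for linguistic acceptability (the COLA task)
result = pipe("The book was written by the author.")
print(result)
# e.g. [{'label': 'LABEL_1', 'score': 0.87}] -- actual label name and score depend on the checkpoint
```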
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Hartunka/bert_base_rand_100_v2_cola")
model = AutoModelForSequenceClassification.from_pretrained("Hartunka/bert_base_rand_100_v2_cola")
```
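With the model loaded directly, inference is a plain forward pass over the tokenized input. A minimal sketch, assuming PyTorch and the usual COLA convention that class index 1 means "acceptable" (not confirmed by this card):

```python
import torch

inputs = tokenizer("The book was written by the author.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 2) for the two COLA classes

probs = torch.softmax(logits, dim=-1)
pred = probs.argmax(dim=-1).item()
print(pred, probs[0, pred].item())  # predicted class index and its probability
```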
This model is a fine-tuned version of Hartunka/bert_base_rand_100_v2 on the GLUE COLA dataset; its per-epoch results on the evaluation set appear in the training results table below.
Model description: More information needed
Intended uses & limitations: More information needed
Training and evaluation data: More information needed
Training results (per-epoch metrics on the evaluation set):
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|---|---|---|---|---|---|
| 0.6165 | 1.0 | 34 | 0.6170 | 0.0 | 0.6913 |
| 0.5893 | 2.0 | 68 | 0.6174 | 0.0185 | 0.6884 |
| 0.5377 | 3.0 | 102 | 0.6451 | 0.1060 | 0.6731 |
| 0.4833 | 4.0 | 136 | 0.7214 | 0.1088 | 0.6683 |
| 0.4189 | 5.0 | 170 | 0.6888 | 0.0984 | 0.6443 |
| 0.3763 | 6.0 | 204 | 0.7857 | 0.1245 | 0.6414 |
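Matthews correlation is the standard COLA metric: it ranges from -1 to 1, and 0 means performance no better than chance. The 0.0 at epoch 1 together with 0.6913 accuracy is consistent with predicting a single class throughout, since roughly 69% of COLA validation sentences are labeled acceptable. A sketch of how the metric can be computed with scikit-learn over hypothetical predictions:

```python
from sklearn.metrics import matthews_corrcoef

# Hypothetical gold labels and model predictions (1 = acceptable, 0 = unacceptable)
y_true = [1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 1, 0, 0, 0, 1, 1, 1]

print(matthews_corrcoef(y_true, y_pred))  # ~0.47 for this toy example
```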
Base model: Hartunka/bert_base_rand_100_v2
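As a sketch of how this fine-tune could be reproduced from the base model, the outline below uses the standard datasets/Trainer workflow. The hyperparameters are assumptions: the batch size is back-solved from the 34 steps per epoch in the table over COLA's ~8.5k training sentences, and the learning rate is purely illustrative, since the original card's hyperparameter list is not preserved here.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("nyu-mll/glue", "cola")
tokenizer = AutoTokenizer.from_pretrained("Hartunka/bert_base_rand_100_v2")

def tokenize(batch):
    # COLA examples have a single "sentence" field
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "Hartunka/bert_base_rand_100_v2", num_labels=2)

args = TrainingArguments(
    output_dir="bert_base_rand_100_v2_cola",
    num_train_epochs=6,                # matches the six epochs in the results table
    per_device_train_batch_size=256,   # assumption: ~8.5k examples / 34 steps per epoch
    learning_rate=5e-5,                # illustrative; the actual value is not recorded here
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
trainer.train()
```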