Dataset: nyu-mll/glue
How to use Hartunka/tiny_bert_rand_10_v1_rte with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Hartunka/tiny_bert_rand_10_v1_rte")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Hartunka/tiny_bert_rand_10_v1_rte")
model = AutoModelForSequenceClassification.from_pretrained("Hartunka/tiny_bert_rand_10_v1_rte")
```
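RTE is a sentence-pair (premise/hypothesis) task, so both loading styles take a pair of inputs at inference time. A minimal sketch with made-up example sentences; the predicted label names come from whatever `id2label` mapping the checkpoint's config carries:

```python
# Hedged sketch: classify a premise/hypothesis pair with either loading style.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_id = "Hartunka/tiny_bert_rand_10_v1_rte"

# 1) Pipeline: pass the pair as a dict with "text" and "text_pair".
pipe = pipeline("text-classification", model=model_id)
print(pipe({"text": "A man is playing a guitar on stage.",
            "text_pair": "A man is performing music."}))

# 2) Direct model call: tokenize the two sentences together as one sequence pair.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
inputs = tokenizer("A man is playing a guitar on stage.",
                   "A man is performing music.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```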
This model is a fine-tuned version of Hartunka/tiny_bert_rand_10_v1 on the GLUE RTE dataset. Its per-epoch results on the evaluation set are reported in the training results table below.
The following hyperparameters were used during training: more information needed.

Training results:
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 0.7007 | 1.0 | 10 | 0.6918 | 0.5162 |
| 0.6835 | 2.0 | 20 | 0.6920 | 0.5596 |
| 0.6616 | 3.0 | 30 | 0.7088 | 0.5740 |
| 0.6219 | 4.0 | 40 | 0.7803 | 0.5523 |
| 0.5651 | 5.0 | 50 | 0.8309 | 0.5379 |
| 0.4986 | 6.0 | 60 | 0.9334 | 0.5126 |
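The accuracy column above is measured on the RTE validation split. A minimal sketch of how one might rescore the published checkpoint on that split; the model and dataset ids come from this card, everything else is standard datasets/transformers usage:

```python
# Hedged sketch: score the checkpoint on the GLUE RTE validation split.
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Hartunka/tiny_bert_rand_10_v1_rte"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

rte = load_dataset("nyu-mll/glue", "rte", split="validation")

correct = 0
for example in rte:
    # RTE pairs: sentence1 is the premise, sentence2 the hypothesis.
    inputs = tokenizer(example["sentence1"], example["sentence2"],
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    correct += int(logits.argmax(dim=-1).item() == example["label"])

print(f"validation accuracy: {correct / len(rte):.4f}")
```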
Base model: Hartunka/tiny_bert_rand_10_v1
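For reference, a minimal sketch of how the base model could be fine-tuned on GLUE RTE with the Trainer API. The card does not document the training hyperparameters, so the batch size and learning rate below are placeholders rather than the values actually used; only the epoch count mirrors the results table.

```python
# Hedged sketch: fine-tune the base model on GLUE RTE with the Trainer API.
# Hyperparameters are illustrative placeholders; the card does not list the real ones.
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

base_id = "Hartunka/tiny_bert_rand_10_v1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)

rte = load_dataset("nyu-mll/glue", "rte")

def tokenize(batch):
    # Encode premise and hypothesis together as one sequence pair.
    return tokenizer(batch["sentence1"], batch["sentence2"], truncation=True)

encoded = rte.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

args = TrainingArguments(
    output_dir="tiny_bert_rand_10_v1_rte",
    num_train_epochs=6,              # the results table reports 6 epochs
    per_device_train_batch_size=32,  # placeholder, not from the card
    learning_rate=2e-5,              # placeholder, not from the card
    eval_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    data_collator=DataCollatorWithPadding(tokenizer),
    compute_metrics=compute_metrics,
)
trainer.train()
```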