How to use Hartunka/distilbert_km_10_v1_mnli with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Hartunka/distilbert_km_10_v1_mnli")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Hartunka/distilbert_km_10_v1_mnli")
model = AutoModelForSequenceClassification.from_pretrained("Hartunka/distilbert_km_10_v1_mnli")
```

This model is a fine-tuned version of Hartunka/distilbert_km_10_v1 on the GLUE MNLI dataset; its per-epoch results on the evaluation set are shown in the table below.
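The pipeline returns the top label and its probability directly. As a minimal sketch, this is the post-processing it applies to the model's raw logits: a softmax followed by an argmax. The label order used here (entailment / neutral / contradiction) is an assumption — check `id2label` in the model's `config.json` for the actual mapping.

```python
import math

# Hypothetical MNLI label order; verify against the model's config.id2label.
ID2LABEL = {0: "entailment", 1: "neutral", 2: "contradiction"}

def postprocess(logits):
    """Softmax over raw logits, then pick the top label,
    as the text-classification pipeline does by default."""
    exps = [math.exp(x - max(logits)) for x in logits]  # shift for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return {"label": ID2LABEL[best], "score": probs[best]}

# Example with made-up logits:
print(postprocess([2.1, 0.3, -1.2]))
```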
Model description: More information needed.

Intended uses & limitations: More information needed.

Training and evaluation data: More information needed.
The following per-epoch results were logged during training (the original hyperparameter list is not available):
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 0.984 | 1.0 | 1534 | 0.9150 | 0.5618 |
| 0.8738 | 2.0 | 3068 | 0.8359 | 0.6174 |
| 0.7796 | 3.0 | 4602 | 0.7962 | 0.6512 |
| 0.6962 | 4.0 | 6136 | 0.7838 | 0.6635 |
| 0.6204 | 5.0 | 7670 | 0.8046 | 0.6658 |
| 0.5434 | 6.0 | 9204 | 0.8482 | 0.6699 |
| 0.468 | 7.0 | 10738 | 0.8990 | 0.6671 |
| 0.3994 | 8.0 | 12272 | 1.0205 | 0.6550 |
| 0.3374 | 9.0 | 13806 | 1.1317 | 0.6654 |
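Note that validation loss bottoms out at epoch 4 while accuracy peaks at epoch 6 and loss climbs thereafter, a typical overfitting pattern. A small sketch of selecting the best checkpoint from the logged rows (data copied from the table above):

```python
# (epoch, validation_loss, accuracy) rows copied from the table above.
rows = [
    (1, 0.9150, 0.5618),
    (2, 0.8359, 0.6174),
    (3, 0.7962, 0.6512),
    (4, 0.7838, 0.6635),
    (5, 0.8046, 0.6658),
    (6, 0.8482, 0.6699),
    (7, 0.8990, 0.6671),
    (8, 1.0205, 0.6550),
    (9, 1.1317, 0.6654),
]

best_by_loss = min(rows, key=lambda r: r[1])  # lowest validation loss
best_by_acc = max(rows, key=lambda r: r[2])   # highest accuracy

print("best epoch by loss:", best_by_loss[0])  # → 4
print("best epoch by accuracy:", best_by_acc[0])  # → 6
```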
Base model: Hartunka/distilbert_km_10_v1