How to use nikolamilosevic/SCIFACT_xlm_roberta_large with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="nikolamilosevic/SCIFACT_xlm_roberta_large")
```

Alternatively, load the tokenizer and model directly:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("nikolamilosevic/SCIFACT_xlm_roberta_large")
model = AutoModelForSequenceClassification.from_pretrained("nikolamilosevic/SCIFACT_xlm_roberta_large")
```

This model is a fine-tuned version of xlm-roberta-large, fine-tuned on the SciFact dataset. Per-epoch evaluation results are reported in the training-results table below.
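When the model is loaded directly (rather than through a pipeline), its raw output is a vector of logits, one per class, which is typically converted to probabilities with a softmax and mapped to a label name via `model.config.id2label`. A minimal sketch of that post-processing step, using hypothetical logits and assumed SciFact-style label names (the real mapping lives in the model's config):

```python
import math

# Hypothetical logits for a single input; in practice these come from
# model(**tokenizer(text, return_tensors="pt")).logits.
logits = [2.1, -0.3, 0.4]

# Assumed label names for illustration only -- check model.config.id2label
# for the actual index-to-label mapping of this checkpoint.
labels = ["SUPPORT", "NOT_ENOUGH_INFO", "CONTRADICT"]

# Softmax: exponentiate and normalize so the scores sum to 1.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# The predicted label is the argmax over the probabilities.
pred = labels[probs.index(max(probs))]
print(pred)  # -> SUPPORT
```

The pipeline helper performs this softmax-and-argmax step internally and returns the label and score directly.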
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training results

The table below reports training loss, validation loss, and accuracy for each epoch:
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| No log | 1.0 | 378 | 1.0485 | 0.4724 |
| 1.0382 | 2.0 | 756 | 1.3964 | 0.6063 |
| 0.835 | 3.0 | 1134 | 0.9168 | 0.8268 |
| 0.6801 | 4.0 | 1512 | 0.7524 | 0.8425 |
| 0.6801 | 5.0 | 1890 | 1.0672 | 0.8346 |
| 0.4291 | 6.0 | 2268 | 0.9599 | 0.8425 |
| 0.2604 | 7.0 | 2646 | 0.8691 | 0.8661 |
| 0.1932 | 8.0 | 3024 | 1.3162 | 0.8268 |
| 0.1932 | 9.0 | 3402 | 1.3200 | 0.8583 |
| 0.0974 | 10.0 | 3780 | 1.1566 | 0.8740 |
| 0.1051 | 11.0 | 4158 | 1.1568 | 0.8819 |
| 0.0433 | 12.0 | 4536 | 1.2013 | 0.8661 |
| 0.0433 | 13.0 | 4914 | 1.1557 | 0.8819 |
| 0.034 | 14.0 | 5292 | 1.3044 | 0.8661 |
| 0.0303 | 15.0 | 5670 | 1.2496 | 0.8819 |
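Validation accuracy peaks at 0.8819, which occurs at epochs 11, 13, and 15, while the training loss keeps falling and the validation loss drifts upward after epoch 4, a typical overfitting pattern. A small sketch of checkpoint selection over the table above, breaking accuracy ties by the lower validation loss:

```python
# (epoch, validation_loss, accuracy) transcribed from the table above.
results = [
    (1, 1.0485, 0.4724), (2, 1.3964, 0.6063), (3, 0.9168, 0.8268),
    (4, 0.7524, 0.8425), (5, 1.0672, 0.8346), (6, 0.9599, 0.8425),
    (7, 0.8691, 0.8661), (8, 1.3162, 0.8268), (9, 1.3200, 0.8583),
    (10, 1.1566, 0.8740), (11, 1.1568, 0.8819), (12, 1.2013, 0.8661),
    (13, 1.1557, 0.8819), (14, 1.3044, 0.8661), (15, 1.2496, 0.8819),
]

# Highest accuracy wins; among equal accuracies, prefer lower validation loss.
best = max(results, key=lambda r: (r[2], -r[1]))
print(best)  # -> (13, 1.1557, 0.8819)
```

Under this criterion the epoch-13 checkpoint would be selected; whether the published weights correspond to that checkpoint or to the final epoch is not stated in the card.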
Base model: FacebookAI/xlm-roberta-large