RoBERTa-large Model for SQuAD

This is a RoBERTa-large model fine-tuned on a combined dataset of SQuAD 1.1 and SQuAD 2.0. It achieves the following performance metrics (a sketch of how to compute them follows the list):

  • Exact Match (EM): 0.7003
  • F1-score: 0.8368
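
Note that these scores are on a 0-1 scale, while the standard SQuAD evaluation reports them on a 0-100 scale. Below is a minimal sketch of computing EM and F1 with the evaluate library's squad_v2 metric, which handles the unanswerable questions introduced in SQuAD 2.0; the prediction/reference records are illustrative, not drawn from the actual evaluation data:

import evaluate

# Load the SQuAD 2.0 metric. It expects a no-answer probability per
# prediction, since SQuAD 2.0 includes unanswerable questions.
squad_v2 = evaluate.load("squad_v2")

# Illustrative records (not from the real evaluation set).
predictions = [{
    "id": "example-0",
    "prediction_text": "Denver Broncos",
    "no_answer_probability": 0.0,
}]
references = [{
    "id": "example-0",
    "answers": {"text": ["Denver Broncos"], "answer_start": [177]},
}]

results = squad_v2.compute(predictions=predictions, references=references)
print(results["exact"], results["f1"])  # reported on a 0-100 scale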

The model was trained for 24 epochs using the following settings (an illustrative configuration sketch follows the list):

  • Batch size: 2
  • Learning rate: 3e-6
  • Gradient accumulation steps: 4 (effective batch size: 2 × 4 = 8)
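
For reference, these hyperparameters correspond roughly to the TrainingArguments setup below. The original training script is not published with this model card, so everything beyond the listed hyperparameters (the output directory, the dataset preprocessing, and the rest of the Trainer setup) is an assumption, not the author's actual code:

from transformers import TrainingArguments

# Hyperparameters taken from the list above; the output directory is a
# hypothetical placeholder, and the full Trainer/dataset setup is omitted.
training_args = TrainingArguments(
    output_dir="squad-roberta-large-finetuned",  # hypothetical path
    num_train_epochs=24,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,  # effective batch size: 2 * 4 = 8
    learning_rate=3e-6,
)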

Usage

You can load this model with the Hugging Face Transformers library:

from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Download the tokenizer and model weights from the Hugging Face Hub.
model_name = "hosseinfarahi/squad-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
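
Once loaded, the model and tokenizer can be wrapped in a question-answering pipeline for inference. A minimal sketch; the question and context below are illustrative:

from transformers import pipeline

# Reuse the model and tokenizer loaded above.
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

# Illustrative inputs; any question/context pair works.
result = qa(
    question="What is RoBERTa based on?",
    context="RoBERTa is a robustly optimized variant of BERT pretraining.",
)
print(result["answer"], round(result["score"], 3))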