Fine-Tuning ParsBERT for Persian Question Answering
In this project, we fine-tuned the pedramyazdipoor/parsbert_question_answering_PQuAD model using the SajjadAyoubi/persian_qa dataset. The goal was to adapt ParsBERT more effectively to the task of extractive Persian question answering.
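As background for how such fine-tuning is set up: the persian_qa dataset stores each answer as a character offset into the context, while an extractive QA model is trained on token-level start/end positions, so a preprocessing step must map one to the other. The sketch below illustrates that mapping with a toy whitespace tokenizer; it is an assumption-laden stand-in for the real pipeline, which would use ParsBERT's WordPiece tokenizer and its offset mappings.

```python
# Sketch of a core preprocessing step for extractive QA fine-tuning:
# converting a character-level answer span into token indices.
# A whitespace tokenizer stands in here for ParsBERT's WordPiece
# tokenizer (which exposes the same idea via `return_offsets_mapping`).

def char_span_to_token_span(context, answer_start, answer_text):
    """Map a character-level answer span to (start_token, end_token) indices."""
    answer_end = answer_start + len(answer_text)  # exclusive char offset
    tokens, offsets, pos = [], [], 0
    for tok in context.split():
        start = context.index(tok, pos)          # char offset of this token
        offsets.append((start, start + len(tok)))
        tokens.append(tok)
        pos = start + len(tok)
    start_token = end_token = None
    for i, (s, e) in enumerate(offsets):
        if start_token is None and e > answer_start:
            start_token = i                      # first token overlapping the answer
        if s < answer_end:
            end_token = i                        # last token overlapping the answer
    return start_token, end_token

context = "ParsBERT is a transformer model for the Persian language"
answer = "transformer model"
start, end = char_span_to_token_span(context, context.index(answer), answer)
# context.split()[start:end + 1] == ["transformer", "model"]
```

The returned token indices become the training labels for the model's start- and end-position classification heads.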
Pre-Fine-Tuning Accuracy
Before fine-tuning, the base model (pedramyazdipoor/parsbert_question_answering_PQuAD) achieved the following metrics:
Exact Match: 23.75
F1 Score: 42.05
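For reference, Exact Match and F1 for extractive QA are conventionally computed SQuAD-style: EM checks for an exact string match, while F1 measures token overlap between the predicted and reference answers. A minimal sketch, assuming whitespace tokenization and omitting the Persian-specific text normalization a real evaluation would apply:

```python
from collections import Counter

def exact_match(prediction, reference):
    """1.0 if the stripped answer strings are identical, else 0.0."""
    return float(prediction.strip() == reference.strip())

def f1_score(prediction, reference):
    """Token-overlap F1 between a predicted and a reference answer span."""
    pred_tokens = prediction.split()
    ref_tokens = reference.split()
    common = Counter(pred_tokens) & Counter(ref_tokens)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Per-example scores are averaged over the evaluation set and
# reported as percentages, as in the tables above and below.
```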
Fine-Tuned Results
After fine-tuning, the model improved substantially on both metrics:
Exact Match: 36.88
F1 Score: 51.76
These results confirm that fine-tuning ParsBERT on a task-specific dataset such as persian_qa markedly improves its ability to locate and extract correct answer spans.
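To make "extracting correct answers" concrete: at inference time, an extractive QA head emits a start logit and an end logit for every token, and the predicted answer is the span that maximizes their sum subject to the end not preceding the start. A minimal sketch with toy logits (the length cap `max_len` is an illustrative parameter, not a value taken from this project):

```python
def best_span(start_logits, end_logits, max_len=30):
    """Return (start, end) maximizing start_logits[s] + end_logits[e], e >= s."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_logit in enumerate(start_logits):
        # Only consider spans of at most max_len tokens.
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best

start_logits = [0.1, 2.0, 0.3, 0.2]
end_logits = [0.0, 0.5, 3.0, 0.1]
# best span: tokens 1 through 2
```

The selected token span is then mapped back to a character span in the context to produce the final answer string.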
Dataset and Model Access
- Dataset: SajjadAyoubi/persian_qa
- Base Model (fine-tuned in this project): pedramyazdipoor/parsbert_question_answering_PQuAD
This project demonstrates how a domain-specific fine-tuning process using publicly available datasets can substantially boost the performance of existing Persian language models like ParsBERT on extractive QA tasks.