
Fine-Tuning ParsBERT for Persian Question Answering

In this project, we fine-tuned the pedramyazdipoor/parsbert_question_answering_PQuAD model using the SajjadAyoubi/persian_qa dataset. The goal was to adapt ParsBERT more effectively to the task of extractive Persian question answering.
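The card does not include the training code itself; the snippet below is a minimal sketch of how the base checkpoint and the PersianQA dataset can be loaded from the Hugging Face Hub with the transformers and datasets libraries.

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
from datasets import load_dataset

# Starting checkpoint: ParsBERT already trained for extractive QA on PQuAD.
base_checkpoint = "pedramyazdipoor/parsbert_question_answering_PQuAD"
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(base_checkpoint)

# PersianQA dataset used for fine-tuning.
dataset = load_dataset("SajjadAyoubi/persian_qa")
print(dataset)
```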


Pre-Fine-Tuning Accuracy

Before fine-tuning, the base model (pedramyazdipoor/parsbert_question_answering_PQuAD) achieved the following metrics:

Exact Match: 23.75
F1 Score: 42.05
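The evaluation script is not shown in the card; one common way to compute Exact Match and F1 for extractive QA is the squad metric from the evaluate library, sketched below with toy predictions rather than real model output.

```python
import evaluate

# SQuAD-style metric: returns exact_match and f1, both on a 0-100 scale.
squad_metric = evaluate.load("squad")

# Toy example; in practice, predictions come from the model and
# references from the PersianQA evaluation split.
predictions = [{"id": "0", "prediction_text": "تهران"}]
references = [{"id": "0", "answers": {"text": ["تهران"], "answer_start": [10]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# e.g. {'exact_match': 100.0, 'f1': 100.0}
```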

Fine-Tuned Results

After fine-tuning, the model showed significant improvement:

Exact Match: 36.88
F1 Score: 51.76

These results confirm that fine-tuning ParsBERT on a task-specific dataset such as PersianQA improves its ability to understand questions and extract the correct answer spans.
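The exact fine-tuning recipe (preprocessing and hyperparameters) is not given in the card. The sketch below follows the standard Hugging Face extractive-QA setup with the Trainer API, assuming SQuAD-style question/context/answers fields in PersianQA; the hyperparameter values are illustrative only.

```python
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    default_data_collator,
)
from datasets import load_dataset

base_checkpoint = "pedramyazdipoor/parsbert_question_answering_PQuAD"
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(base_checkpoint)
dataset = load_dataset("SajjadAyoubi/persian_qa")

max_length, stride = 384, 128  # illustrative values


def preprocess(examples):
    # Standard SQuAD-style preprocessing: long contexts are split into
    # overlapping features and answers are mapped to token positions.
    inputs = tokenizer(
        examples["question"],
        examples["context"],
        max_length=max_length,
        truncation="only_second",
        stride=stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )
    offset_mapping = inputs.pop("offset_mapping")
    sample_map = inputs.pop("overflow_to_sample_mapping")
    start_positions, end_positions = [], []
    for i, offsets in enumerate(offset_mapping):
        answer = examples["answers"][sample_map[i]]
        sequence_ids = inputs.sequence_ids(i)
        ctx_start = sequence_ids.index(1)
        ctx_end = len(sequence_ids) - 1 - sequence_ids[::-1].index(1)
        if len(answer["answer_start"]) == 0:
            # Unanswerable question: label both positions with the [CLS] token.
            start_positions.append(0)
            end_positions.append(0)
            continue
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        if offsets[ctx_start][0] > start_char or offsets[ctx_end][1] < end_char:
            # Answer span is not fully contained in this feature.
            start_positions.append(0)
            end_positions.append(0)
        else:
            idx = ctx_start
            while idx <= ctx_end and offsets[idx][0] <= start_char:
                idx += 1
            start_positions.append(idx - 1)
            idx = ctx_end
            while idx >= ctx_start and offsets[idx][1] >= end_char:
                idx -= 1
            end_positions.append(idx + 1)
    inputs["start_positions"] = start_positions
    inputs["end_positions"] = end_positions
    return inputs


tokenized_train = dataset["train"].map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

args = TrainingArguments(
    output_dir="parsbert-persian-qa-finetuned",
    learning_rate=3e-5,  # illustrative hyperparameters
    num_train_epochs=2,
    per_device_train_batch_size=16,
    weight_decay=0.01,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_train,
    data_collator=default_data_collator,
    tokenizer=tokenizer,
)
trainer.train()
```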


Dataset and Model Access

Base model: pedramyazdipoor/parsbert_question_answering_PQuAD (available on the Hugging Face Hub)
Dataset: SajjadAyoubi/persian_qa (available on the Hugging Face Hub)

This project demonstrates how a domain-specific fine-tuning process using publicly available datasets can substantially boost the performance of existing Persian language models like ParsBERT on extractive QA tasks.
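For reference, a fine-tuned checkpoint saved from the training run above can be used for inference through the question-answering pipeline; the model path below is a placeholder, not the actual repository id of this model.

```python
from transformers import pipeline

# Placeholder path/id for the fine-tuned checkpoint produced by the training sketch.
qa = pipeline("question-answering", model="parsbert-persian-qa-finetuned")

result = qa(
    question="پایتخت ایران کجاست؟",      # "What is the capital of Iran?"
    context="تهران پایتخت ایران است.",   # "Tehran is the capital of Iran."
)
print(result["answer"], result["score"])
```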
