---
language: fa
tags:
- parsbert
- bert
- question-answering
- nlp
license: mit
base_model: hooshvare/parsbert-base-uncased
---

# PQuAD: Persian Question Answering Model

This model is a fine-tuned version of **[ParsBERT](https://huggingface.co/hooshvare/parsbert-base-uncased)**, a state-of-the-art Persian language model, for the task of **extractive Question Answering**.

It was trained on a proprietary Persian QA dataset as part of a BSc thesis at **Amirkabir University of Technology**.

## Model Details
- **Base Model:** ParsBERT (Hooshvare Lab)
- **Task:** Extractive Question Answering
- **Language:** Persian (Farsi)
- **Framework:** PyTorch & Transformers
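
In extractive QA, the model does not generate text: its head emits a start logit and an end logit for every token of the context, and the predicted answer is the span between the argmax start position and the argmax end position. A toy sketch of that selection step, using made-up logits rather than real model output:

```python
# Toy illustration of extractive-QA span selection (dummy logits, not the
# real model). Each context token gets a start logit and an end logit;
# the answer is the span between the argmax start and argmax end indices.
tokens = ["دانشگاه", "امیرکبیر", "در", "سال", "۱۳۳۷", "تأسیس", "شد"]
start_logits = [0.1, 0.2, 0.1, 0.3, 4.0, 0.2, 0.1]
end_logits   = [0.1, 0.1, 0.2, 0.2, 3.5, 0.3, 0.1]

# Index of the highest start/end logit
start = max(range(len(start_logits)), key=start_logits.__getitem__)
end = max(range(len(end_logits)), key=end_logits.__getitem__)

answer = " ".join(tokens[start : end + 1])
print(answer)  # ۱۳۳۷
```

In practice the pipeline shown below handles tokenization, this span selection, and mapping token indices back to character offsets in the original context.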

## How to Use
You can use this model directly with the Hugging Face `pipeline`:

```python
from transformers import pipeline

# Load the pipeline
qa_pipeline = pipeline("question-answering", model="newsha/PQuAD")

context = "دانشگاه صنعتی امیرکبیر یکی از با‌سابقه‌ترین دانشگاه‌های فنی ایران است که در سال ۱۳۳۷ در تهران تأسیس شد."
# Translation: "Amirkabir University of Technology is one of the oldest technical
# universities in Iran; it was founded in 1337 (Solar Hijri) in Tehran."

question = "دانشگاه امیرکبیر در چه سالی تأسیس شد؟"
# Translation: "In what year was Amirkabir University founded?"

result = qa_pipeline(question=question, context=context)
print(f"Answer: {result['answer']}")
# Expected output: ۱۳۳۷
```