PQuAD: Persian Question Answering Model
This model is a fine-tuned version of ParsBERT, a state-of-the-art Persian language model, for extractive question answering.
It was trained on a proprietary Persian QA dataset as part of a BSc thesis at Amirkabir University of Technology.
Model Details
- Base Model: ParsBERT (Hooshvare Lab)
- Task: Extractive Question Answering
- Language: Persian (Farsi)
- Framework: PyTorch & Transformers
How to Use
You can use this model directly with the Hugging Face pipeline:
```python
from transformers import pipeline

# Load the question-answering pipeline
qa_pipeline = pipeline("question-answering", model="newsha/PQuAD")

# Context: "Amirkabir University of Technology is one of the longest-standing
# technical universities in Iran, founded in 1337 (1958 CE) in Tehran."
context = "دانشگاه صنعتی امیرکبیر یکی از باسابقه‌ترین دانشگاه‌های فنی ایران است که در سال ۱۳۳۷ در تهران تأسیس شد."

# Question: "In what year was Amirkabir University founded?"
question = "دانشگاه امیرکبیر در چه سالی تأسیس شد؟"

result = qa_pipeline(question=question, context=context)
print(f"Answer: {result['answer']}")
# Output: ۱۳۳۷
```
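The pipeline hides the extractive step. You can also load the tokenizer and model directly with `AutoModelForQuestionAnswering` and decode the answer span from the QA head's start/end logits yourself. The sketch below is illustrative and not part of the model card: `pick_span` and `answer` are hypothetical helper names, and the naive span search omits some filtering the pipeline performs (such as excluding question tokens), so results can differ on edge cases.

```python
def pick_span(start_logits, end_logits, max_answer_len=30):
    """Return (start, end) token indices of the highest-scoring span with
    start <= end and length at most max_answer_len tokens."""
    best, best_score = (0, 0), float("-inf")
    for s, s_score in enumerate(start_logits):
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = s_score + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best


def answer(question, context, model_name="newsha/PQuAD"):
    """Load the model directly and decode the best answer span as text."""
    # Heavy dependencies are imported lazily so pick_span stays standalone.
    import torch
    from transformers import AutoTokenizer, AutoModelForQuestionAnswering

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForQuestionAnswering.from_pretrained(model_name)

    inputs = tokenizer(question, context, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # One batch element; take the best-scoring (start, end) pair.
    start, end = pick_span(outputs.start_logits[0].tolist(),
                           outputs.end_logits[0].tolist())
    return tokenizer.decode(inputs["input_ids"][0][start : end + 1])
```

Calling `answer(question, context)` with the example above should recover the same ۱۳۳۷ span as the pipeline, assuming the model uses the standard Transformers extractive-QA head.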