---
license: cc-by-nc-sa-4.0
---
# PerMedCQA: Persian Medical Consumer QA Benchmark
**PerMedCQA: Benchmarking Large Language Models on Medical Consumer Question Answering in Persian**
PerMedCQA is the first large-scale, real-world benchmark for Persian-language medical consumer question answering. It contains anonymized medical inquiries from Persian-speaking users paired with professional responses, enabling rigorous evaluation of large language models in low-resource, health-related domains.
---
## 📊 Dataset Overview
- **Total entries**: 68,138 QA pairs
- **Source platforms**: DrYab, HiSalamat, GetZoop, Mavara-e-Teb
- **Timeframe**: Nov 10, 2022 – Apr 2, 2024
- **Languages**: Persian only
- **Licensing**: [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
---
## 🔗 Paper
- 📄 [Paper on arXiv](https://arxiv.org/abs/2505.18331)
- 📊 [Papers with Code page](https://paperswithcode.com/paper/permedcqa-benchmarking-large-language-models)
---
## 🧬 Metadata & Features
Each example in the dataset includes:
- `instance_id`: Unique ID for each QA pair
- `Title`: Short user-submitted title
- `Question`: Full Persian-language consumer medical question
- `Expert_Answer`: Doctor’s response
- `Category`: Medical topic (e.g., “پوست و مو”, “skin and hair”)
- `Specialty`: Expert’s medical field (e.g., “متخصص پوست و مو”, “skin and hair specialist”)
- `Age`: Reported patient age
- `Weight`: Reported weight (optional)
- `Sex`: Patient gender (`"man"` or `"woman"`)
- `dataset_source`: Name of the platform (e.g., DrYab, Getzoop)
- `Tag`: ICD‑11 label and rationale
- `QuestionType`: Question classification tag (e.g., "Contraindication", "Indication") and reasoning
---
## 📁 Dataset Structure
```json
{
  "Title": "قرمزی پوست نوزاد بعد از استفاده از پماد",
  "Category": "پوست و مو",
  "Specialty": "متخصص پوست و مو",
  "Age": "1",
  "Weight": "10",
  "Sex": "man",
  "dataset_source": "HiSalamat",
  "instance_id": 32405,
  "Tag": {
    "Tag": 23,
    "Tag_Reasoning": "The question addresses a skin reaction in an infant following the application of a cream, indicating a dermatological condition."
  },
  "Question": "سلام خسته نباشید. من واسه پسر ۱ سالهام که جای واکسنش سفت شده بود، پماد موضعی استفاده کردم. ولی الان پوستش خیلی قرمز شده و خارش داره. ممکنه حساسیت داده باشه؟ باید چکار کنم؟",
  "Expert_Answer": "احتمالا پوست نوزاد به ترکیبات پماد حساسیت نشان داده است. مصرف آن را قطع کنید و در صورت ادامه علائم به متخصص پوست مراجعه کنید.",
  "QuestionType": {
    "Explanation": "The user asks about an adverse skin reaction following the use of a topical medication on an infant, which is a case of possible side effects.",
    "QuestionType_Tag": "SideEffect"
  }
}
```
---
## 📥 How to Load
```python
from datasets import load_dataset
ds = load_dataset("NaghmehAI/PerMedCQA", split="train")
print(ds[0])
```
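Once loaded, the fields listed under **Metadata & Features** can be sliced with ordinary Python. A minimal sketch over plain dicts follows (the second record is hypothetical and the records are abridged; with the `datasets` library the same filter runs via `ds.filter(...)`):

```python
# Sketch: slicing PerMedCQA-style records by Category and reading the
# nested Tag / QuestionType fields. Records are abridged and partly
# hypothetical; with `datasets`, use ds.filter(...) on the real split.
records = [
    {
        "instance_id": 32405,
        "Category": "پوست و مو",  # "skin and hair"
        "dataset_source": "HiSalamat",
        "Tag": {"Tag": 23, "Tag_Reasoning": "dermatological condition"},
        "QuestionType": {"QuestionType_Tag": "SideEffect",
                         "Explanation": "possible adverse reaction to a cream"},
    },
    {
        "instance_id": 10001,  # hypothetical second record
        "Category": "قلب و عروق",  # "cardiovascular"
        "dataset_source": "DrYab",
        "Tag": {"Tag": 11, "Tag_Reasoning": "circulatory system"},
        "QuestionType": {"QuestionType_Tag": "Indication",
                         "Explanation": "asks when a drug is indicated"},
    },
]

# Filter by medical topic, then by question type.
derm = [r for r in records if r["Category"] == "پوست و مو"]
side_effects = [r for r in records
                if r["QuestionType"]["QuestionType_Tag"] == "SideEffect"]
print(len(derm), side_effects[0]["instance_id"])
```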
---
## 🚀 Intended Uses
- 🧠 **Evaluation** of multilingual or Persian-specific LLMs in real-world, informal medical domains
- 🛠️ **Fine-tuning and instruction-tuning**, as well as few-shot or zero-shot prompting experiments
- 🌍 **Cultural insights**: Persian language behavior in health-related discourse
- ⚠️ **NOT for clinical use**: Informational and research purposes only
---
## ⚙️ Data Processing Pipeline
### Stage 1: Column Transformation (`change_columns.py`)
- Reads CSV/JSON inputs and converts them into a structured JSON format
- Handles single-turn, multi-turn, and multi-expert Q&A data
- Cleans and formats the text, removing unnecessary whitespace and newlines
- Creates a chat-style JSON file with `user` and `assistant` roles
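A minimal sketch of Stage 1's core idea, whitespace cleanup plus chat-style restructuring, is shown below; the actual `change_columns.py` likely handles more input shapes (multi-turn, multi-expert):

```python
import re

def clean(text: str) -> str:
    # Collapse runs of whitespace and newlines into single spaces.
    return re.sub(r"\s+", " ", text).strip()

def to_chat(row: dict) -> dict:
    # Map one single-turn Q&A row onto chat-style user/assistant messages,
    # mirroring the transformation described above (field names assumed
    # to match the dataset schema).
    return {
        "messages": [
            {"role": "user", "content": clean(row["Question"])},
            {"role": "assistant", "content": clean(row["Expert_Answer"])},
        ]
    }

row = {"Question": "  سلام،\n\nسوال دارم  ", "Expert_Answer": "پاسخ \n پزشک"}
chat = to_chat(row)
print(chat["messages"][0]["content"])
```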
### Stage 2: QA Preprocessing (`preprocess_for_qa.py`)
- Truncates multi-turn dialogues to the first Q&A pair
- Removes:
- Empty or invalid messages
- Q&A pairs shorter than 3 words
- Duplicate Q&A instances
- Adds:
- `dataset_source` and `instance_id` to each item
- Merges cleaned records into `All_QA_preprocessed.json`
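The Stage 2 steps above can be sketched as follows; note the sequential `instance_id` scheme and the exact "shorter than 3 words" threshold are assumptions, as the real `preprocess_for_qa.py` may differ:

```python
def preprocess(dialogues, source):
    # Keep only the first user/assistant pair of each dialogue; drop
    # empty or very short (< 3 words) pairs and exact duplicates; then
    # attach provenance fields, mirroring the steps described above.
    seen = set()
    out = []
    for msgs in dialogues:
        if len(msgs) < 2:
            continue  # no complete Q&A pair
        q, a = msgs[0]["content"].strip(), msgs[1]["content"].strip()
        if not q or not a or len(q.split()) < 3 or len(a.split()) < 3:
            continue  # empty/invalid or too-short pair
        if (q, a) in seen:
            continue  # duplicate Q&A instance
        seen.add((q, a))
        out.append({
            "dataset_source": source,
            "instance_id": len(out),  # sequential ID; real scheme may differ
            "Question": q,
            "Expert_Answer": a,
        })
    return out

dialogues = [
    [{"content": "درد شدید قفسه سینه دارم"},
     {"content": "به اورژانس مراجعه کنید لطفا"}],
    [{"content": "سلام"}, {"content": "بفرمایید"}],  # too short: dropped
    [{"content": "درد شدید قفسه سینه دارم"},
     {"content": "به اورژانس مراجعه کنید لطفا"}],  # duplicate: dropped
]
qa = preprocess(dialogues, "DrYab")
print(len(qa))
```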
### Dataset Cleaning Results
| Dataset      | Step 1 Removed | Step 2 Removed | Step 3 Removed | Final Records |
|--------------|----------------|----------------|----------------|---------------|
| DrYab        | 63             | 1,083          | 25             | 37,905        |
| GetZoop      | 1,005          | 1,352          | 8              | 25,502        |
| HiSalamat    | 9,580          | 121            | 1              | 5,220         |
| Mavara-e-Teb | 0              | 1,034          | 92             | 4,789         |
---
## 📚 Citation
If you use **PerMedCQA**, please cite:
```bibtex
@misc{jamali2025permedcqa,
title={PerMedCQA: Benchmarking Large Language Models on Medical Consumer Question Answering in Persian Language},
author={Jamali, Naghmeh and Mohammadi, Milad and Baledi, Danial and Rezvani, Zahra and Faili, Heshaam},
year={2025},
eprint={2505.18331},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.18331}
}
```
---
## 📬 Contact
For collaboration or questions:
- 📧 Naghmeh Jamali – naghme.jamali.ai@gmail.com
- 📧 Milad Mohammadi – miladmohammadi@ut.ac.ir
- 📧 Danial Baledi – baledi.danial@gmail.com