---
language:
- fa
size_categories:
- n<1K
task_categories:
- visual-question-answering
- multiple-choice
- question-answering
pretty_name: PerMed-MM
configs:
- config_name: default
  data_files:
  - split: test
    path: PerMed-MM.json
tags:
- medical
---
|
# PerMed-MM: A Multimodal, Multi-Specialty Persian Medical Benchmark |
|
[**🤗 Dataset**](https://huggingface.co/datasets/universitytehran/PerMed-MM) | [**📖 Paper (Link)**] |
|
## Dataset Description |
|
**PerMed-MM** is the first multimodal, multi-specialty benchmark for evaluating Vision-Language Models (VLMs) on **Persian** medical question answering.
|
The dataset comprises **733 multiple-choice questions** sourced from the Iranian National Medical Board Exams. Each question is paired with **one to five clinically relevant images**, totaling **944 images**. The benchmark spans **46 medical specialties** and covers a wide range of visual modalities, including radiographic images, histopathology slides, dermatologic photographs, and ECG waveforms. |
|