---
license: mit
task_categories:
- question-answering
- multiple-choice
language:
- ja
- en
tags:
- medical
- healthcare
- multimodal
- licensing-exam
- benchmark
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:
  - split: JA
    path: data/train_ja.jsonl
  - split: EN
    path: data/train_en.jsonl
  - split: mixed
    path: data/train_mixed.jsonl
---
# KokushiMD-10: Benchmark for Evaluating Large Language Models on Ten Japanese National Healthcare Licensing Examinations

## Overview
KokushiMD-10 is the first comprehensive multimodal benchmark constructed from ten Japanese national healthcare licensing examinations. This dataset addresses critical gaps in existing medical AI evaluation by providing a linguistically grounded, multimodal, and multi-profession assessment framework for large language models (LLMs) in healthcare contexts.
The dataset is available in multiple splits:
- JA split: Original Japanese language questions only
- EN split: English translated questions only
- mixed split: Both Japanese and English questions combined
You can load specific language splits using:

```python
from datasets import load_dataset

# Load Japanese questions only
dataset_ja = load_dataset("humanalysis-square/KokushiMD-10", split="JA")

# Load English questions only
dataset_en = load_dataset("humanalysis-square/KokushiMD-10", split="EN")

# Load both languages mixed
dataset_mixed = load_dataset("humanalysis-square/KokushiMD-10", split="mixed")

# Load all splits
dataset = load_dataset("humanalysis-square/KokushiMD-10")
# Access individual splits: dataset["JA"], dataset["EN"], dataset["mixed"]
```
## Key Features
- Multi-Professional Coverage: Spans 10 healthcare professions including Medicine, Dentistry, Nursing, Pharmacy, and allied health specialties
- Multimodal Questions: Contains both text-only and image-based questions (radiographs, clinical photographs, etc.)
- Large Scale: 11,588+ real examination questions from official licensing exams
- Expert Annotations: Six professions include detailed Chain-of-Thought (CoT) explanations
- Recent Data: Covers examinations from 2020-2024
- Japanese Language: Addresses the linguistic gap in medical QA benchmarks
## Dataset Composition

### Healthcare Professions Included
- Medicine (医師) - Physician Licensing Examination
- Dentistry (歯科医師) - Dental Licensing Examination
- Nursing (看護師) - Registered Nurse Examination
- Pharmacy (薬剤師) - Pharmacist Licensing Examination
- Midwifery (助産師) - Midwife Licensing Examination
- Public Health Nursing (保健師) - Public Health Nurse Examination
- Physical Therapy (理学療法士) - Physical Therapist Examination
- Occupational Therapy (作業療法士) - Occupational Therapist Examination
- Optometry (視能訓練士) - Certified Orthoptist Examination
- Radiologic Technology (診療放射線技師) - Radiologic Technologist Examination
### Question Types
- Single-choice questions: Traditional multiple-choice with one correct answer
- Multiple-choice questions: Questions requiring selection of multiple correct answers
- Calculation questions: Numerical problem-solving tasks
- Fill-in-the-blank: Questions requiring specific term completion
### Multimodal Content
- Text-only questions: Traditional question-answer pairs
- Image-based questions: Questions incorporating clinical images, radiographs, charts, and diagrams
- Mixed modality: Questions combining textual descriptions with visual information
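To evaluate text-only and multimodal questions separately, the per-record `text_only` flag (described in the data structure section below) can be used to partition a split. A minimal sketch in plain Python, assuming each record is a dict carrying that field:

```python
# Toy records mimicking the dataset schema; only the fields needed
# for partitioning are shown (values are hypothetical).
records = [
    {"index": "1", "text_only": True},
    {"index": "2", "text_only": False},
    {"index": "3", "text_only": True},
]

def split_by_modality(records):
    """Partition records into text-only and multimodal lists
    using the `text_only` flag."""
    text_only = [r for r in records if r["text_only"]]
    multimodal = [r for r in records if not r["text_only"]]
    return text_only, multimodal

text_only, multimodal = split_by_modality(records)
```

With the `datasets` library, the same partition can be expressed as `dataset.filter(lambda ex: ex["text_only"])`.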
## Dataset Statistics
- Total Questions: 11,588+ per language (23,176+ in mixed split)
- Text-only Questions: ~8,500 per language
- Image-based Questions: ~3,000 per language
- Professions with CoT Explanations: 6 out of 10
- Time Period: 2020-2024 examinations
- Languages: Japanese (JA split) and English (EN split)
## Data Structure
Folder tree:

```
KokushiMD-10/
├── data/
│   ├── train_mixed.jsonl   # Mixed dataset (both JA and EN)
│   ├── train_ja.jsonl      # Japanese questions only
│   ├── train_en.jsonl      # English questions only
│   ├── all_data_ja.json    # Japanese backup (JSON format)
│   └── all_data_en.json    # English backup (JSON format)
├── exams/
│   ├── EN/                 # English version
│   └── JA/                 # Japanese exam data
│       ├── 医師/           # Medicine
│       │   ├── 医師_{year}_{section}.json
│       │   └── ...
│       ├── 歯科/           # Dentistry
│       │   ├── 歯科_{year}_{section}.json
│       │   └── ...
│       └── ...
└── CoT/                    # Chain-of-Thought explanations
    ├── EN/                 # English version
    └── JA/                 # Same structure as exams/JA/
        ├── 医師/
        │   ├── 医師_{year}_{section}.json
        │   └── ...
        ├── 歯科/
        ├── 看護/
        ├── 薬剤/
        ├── 助産/
        └── 保健/
```
Images are provided here.
Each example in the `exams` folder contains:

```json
{
  "year": "year of the exam",
  "section": "section of the question",
  "index": "question index",
  "question": "question text",
  "a": "choice A",
  "b": "choice B",
  "c": "choice C",
  "d": "choice D",
  "e": "choice E",
  "1": "choice 1; some exams use numeric choices",
  "2": "choice 2",
  "3": "choice 3",
  "4": "choice 4",
  "5": "choice 5",
  "answer": "correct answer to the question; when answer=0, any non-empty answer should be judged as correct",
  "answer_1": "another version of the correct answer, present when more than one is provided",
  "answer_n": "further answer versions",
  "points": "score for this question",
  "human_accuracy": "only available for Medicine and Dentistry",
  "img": "images in the appendix",
  "content_fig": "images for background material shared by multiple questions",
  "question_fig": "images in the question part",
  "answer_fig": "images in the choices",
  "text_only": "bool; whether the question is text-only or multimodal",
  "answer_sub": "general category for the question",
  "answer_sub2": "subcategory for the question",
  "kinki": ["forbidden choices are listed here"]
}
```
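The answer semantics above (`answer=0` accepts any non-empty response; `answer_1`, `answer_2`, … hold alternative accepted answers) can be implemented with a small checker. The helper below is a hypothetical sketch, not part of the dataset tooling:

```python
def is_correct(record, prediction):
    """Check a prediction against a question record.

    Implements the answer semantics described in the schema:
    - answer == "0": any non-empty prediction counts as correct
    - answer_1, answer_2, ...: alternative accepted answers
    (Hypothetical helper; only numeric answer_N suffixes are treated
    as alternatives, so answer_sub / answer_fig are excluded.)
    """
    gold = record.get("answer", "")
    if gold == "0":
        return bool(str(prediction).strip())
    accepted = [gold] + [
        v for k, v in record.items()
        if k.startswith("answer_")
        and k[len("answer_"):].isdigit()
        and v
    ]
    return str(prediction).strip() in accepted

# Example record with one alternative accepted answer (toy values).
q = {"answer": "a", "answer_1": "ac", "answer_sub": "anatomy"}
```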
Each example in the CoT records contains:

```json
{
  "year": "year of the exam",
  "section": "exam section",
  "index": "index of the question",
  "theme_explanation": "a summarized explanation of the question",
  "choices_explanation": "explanation of why each choice is correct or incorrect"
}
```
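CoT records share the `(year, section, index)` triple with exam records, so the two can be joined on that key. A hypothetical pairing helper, assuming both sides are lists of dicts:

```python
def attach_cot(questions, cot_records):
    """Pair each question with its CoT record (or None if absent),
    joining on the shared (year, section, index) key."""
    lookup = {
        (c["year"], c["section"], c["index"]): c for c in cot_records
    }
    return [
        (q, lookup.get((q["year"], q["section"], q["index"])))
        for q in questions
    ]

# Toy records with the shared key fields (values are hypothetical).
questions = [
    {"year": "2023", "section": "A", "index": "1", "question": "q1"},
    {"year": "2023", "section": "A", "index": "2", "question": "q2"},
]
cot = [
    {"year": "2023", "section": "A", "index": "1",
     "theme_explanation": "summary", "choices_explanation": "per-choice"},
]
paired = attach_cot(questions, cot)
```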
## Evaluation Protocol

### Metrics
- Accuracy: Percentage of correctly answered questions
- Pass Rate: Percentage of exams on which the passing threshold is achieved
- Cross-domain Generalization: Performance consistency across professions
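The first two metrics reduce to simple ratios. A sketch is below; note that the 0.6 default threshold is an illustrative placeholder, since real passing criteria vary by examination and section:

```python
def accuracy(correct_flags):
    """Fraction of questions answered correctly.
    `correct_flags` is a list of booleans, one per question."""
    return sum(correct_flags) / len(correct_flags)

def pass_rate(exam_scores, threshold=0.6):
    """Fraction of exams whose score meets the passing threshold.
    The 0.6 default is an illustrative placeholder, not an
    official passing criterion."""
    return sum(s >= threshold for s in exam_scores) / len(exam_scores)
```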
## Benchmark Version History

### v1.1 (2025/09/08)
- Added support for multiple correct answer versions per question.
- Fixed formatting errors in some answer fields.
- Expanded the number of choices per question to a maximum of 9, ensuring all options are included for questions with more than 5 choices.
- Corrected some question IDs for consistency.
- Completed missing English translations.
### v1.0 (2025/06/22)
- Initial public release of KokushiMD-10 benchmark covering 10 Japanese national healthcare licensing exams.
## Citation

If you use this dataset in your research, please cite:

```bibtex
@article{liu2025kokushimd10,
  title={KokushiMD-10: Benchmark for Evaluating Large Language Models on Ten Japanese National Healthcare Licensing Examinations},
  author={Liu, Junyu and Yan, Kaiqi and Wang, Tianyang and Niu, Qian and Nagai-Tanima, Momoko and Aoyama, Tomoki},
  journal={arXiv preprint arXiv:2506.11114},
  year={2025}
}
```
## License
This dataset is released under the MIT License. See LICENSE for details.
## Data Source

The dataset is constructed from official Japanese national healthcare licensing examinations published by the Ministry of Health, Labour and Welfare of Japan between 2020 and 2024.
## Ethical Considerations
- All questions are from publicly available official examinations
- No patient privacy concerns as images are educational/examination materials
- Dataset intended for research and educational purposes
- Should not be used as a substitute for professional medical advice
## Acknowledgments
We thank the Ministry of Health, Labour and Welfare of Japan for making the examination materials publicly available, and all healthcare professionals who contributed to the creation of these rigorous assessments.
**Disclaimer**: This dataset is for research and educational purposes only. It should not be used for clinical decision-making or as a substitute for professional medical judgment.