---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: input
    dtype: string
  - name: opa
    dtype: string
  - name: opb
    dtype: string
  - name: opc
    dtype: string
  - name: opd
    dtype: string
  - name: cop
    dtype: int64
  - name: choice_type
    dtype: string
  - name: exp
    dtype: string
  - name: subject_name
    dtype: string
  - name: topic_name
    dtype: string
  - name: output
    dtype: string
  - name: options
    dtype: string
  - name: letter
    dtype: string
  - name: incorrect_letters
    list: string
  - name: incorrect_answers
    list: string
  - name: single_incorrect_answer
    dtype: string
  - name: system_prompt
    dtype: string
  - name: messages
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  splits:
  - name: train
    num_bytes: 221816870
    num_examples: 164539
  - name: test
    num_bytes: 24647517
    num_examples: 18283
  download_size: 144137775
  dataset_size: 246464387
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_name: mkurman/medmcqa-hard
license: cc
language:
- en
task_categories:
- multiple-choice
- question-answering
- reinforcement-learning
tags:
- medical
- MCQ
- evaluation
- SFT
- DPO
- RL
pretty_name: MedMCQA-Hard
size_categories:
- 10k<n<1M
---
# medmcqa-hard
**A harder, de-duplicated remix of MedMCQA** designed to reduce memorization and strengthen medical MCQ generalization.
## Why “hard”?
* **Answer list variants:** Each correct option appears in **multiple phrasing/list variants** (e.g., reordered enumerations, equivalent wording), so models can’t rely on surface-form recall and must reason over content.
* **RL-friendly targets:** Every item includes **one canonical correct answer** plus both a **single** incorrect answer and the full **set** of incorrect answers, making it plug-and-play for **DPO**, **RLAIF/GRPO**, and contrastive objectives.
* **Chat formatting:** Adds lightweight **`messages`** (and optional `system_prompt`) not present in the original dataset, making it convenient for instruction-tuned models and SFT.
## Intended uses
* Robust **eval** of medical QA beyond memorization.
* **SFT** with chat-style prompts.
* **DPO / other RL** setups using `single_incorrect_answer` or `incorrect_answers`.
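The DPO setup above can be sketched as follows. This is a minimal illustration using the field names from the schema; the `record` dict is a hypothetical row, not taken from the dataset, and a real pipeline would iterate over the loaded splits.

```python
def to_dpo_pair(record):
    """Turn one medmcqa-hard record into a (prompt, chosen, rejected) triple."""
    prompt = record["input"] + "\n" + record["options"]
    chosen = record["letter"]                     # canonical correct answer
    rejected = record["single_incorrect_answer"]  # one hard negative
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

# Hypothetical record for illustration only.
record = {
    "input": "Which vitamin deficiency causes scurvy?",
    "options": "A) Vitamin A\nB) Vitamin B12\nC) Vitamin C\nD) Vitamin D",
    "letter": "C",
    "single_incorrect_answer": "Vitamin D",
}
pair = to_dpo_pair(record)
print(pair["chosen"], "vs", pair["rejected"])
```

Swapping `single_incorrect_answer` for each element of `incorrect_answers` yields multiple rejected completions per prompt, which suits listwise or contrastive objectives.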
## Data schema (selected fields)
* `input`: str (the question text)
* `opa`…`opd`: str (the four answer options)
* `options`: str (the rendered option list)
* `letter`: str (A/B/C/D)
* `cop`: int (0-based index of correct option)
* `incorrect_letters`: list[str]
* `incorrect_answers`: list[str]
* `single_incorrect_answer`: str
* `messages`: list[{role: "system"|"user"|"assistant", content: str}]
* `system_prompt`: str (optional)
### Example
```json
{
  "input": "Which of the following is true about …?",
  "options": "A) …\nB) …\nC) …\nD) …",
  "letter": "C",
  "cop": 2,
  "incorrect_answers": ["A …", "B …", "D …"],
  "single_incorrect_answer": "B …",
  "messages": [
    {"role": "system", "content": "You are a medical tutor."},
    {"role": "user", "content": "Q: Which of the following…?\nA) …\nB) …\nC) …\nD) …"}
  ]
}
```
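For evaluation, model outputs can be scored against the canonical `letter` field. A minimal sketch (the records here are hypothetical stand-ins; in practice you would iterate over rows loaded via `datasets.load_dataset("mkurman/medmcqa-hard")`):

```python
def accuracy(records, predictions):
    """Fraction of predicted letters matching the canonical correct letter."""
    correct = sum(
        1
        for rec, pred in zip(records, predictions)
        if pred.strip().upper() == rec["letter"]
    )
    return correct / len(records)

# Hypothetical records and predictions for illustration.
records = [{"letter": "C"}, {"letter": "A"}]
print(accuracy(records, ["c", "B"]))  # 0.5
```

Normalizing case and whitespace before comparison avoids penalizing models that answer "c" instead of "C".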
## Source & attribution
Derived from **MedMCQA** (Pal, Umapathi, Sankarasubbu; CHIL 2022). Please cite the original dataset/paper when using this work.
> **Safety note:** Research/education only. Not for clinical use.