---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: input
      dtype: string
    - name: opa
      dtype: string
    - name: opb
      dtype: string
    - name: opc
      dtype: string
    - name: opd
      dtype: string
    - name: cop
      dtype: int64
    - name: choice_type
      dtype: string
    - name: exp
      dtype: string
    - name: subject_name
      dtype: string
    - name: topic_name
      dtype: string
    - name: output
      dtype: string
    - name: options
      dtype: string
    - name: letter
      dtype: string
    - name: incorrect_letters
      list: string
    - name: incorrect_answers
      list: string
    - name: single_incorrect_answer
      dtype: string
    - name: system_prompt
      dtype: string
    - name: messages
      list:
        - name: content
          dtype: string
        - name: role
          dtype: string
  splits:
    - name: train
      num_bytes: 221816870
      num_examples: 164539
    - name: test
      num_bytes: 24647517
      num_examples: 18283
  download_size: 144137775
  dataset_size: 246464387
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
dataset_name: mkurman/medmcqa-hard
license: cc
language:
  - en
task_categories:
  - multiple-choice
  - question-answering
  - reinforcement-learning
tags:
  - medical
  - MCQ
  - evaluation
  - SFT
  - DPO
  - RL
pretty_name: MedMCQA-Hard
size_categories:
  - 10k<n<1M
---

# medmcqa-hard

A harder, de-duplicated remix of MedMCQA designed to reduce memorization and strengthen medical MCQ generalization.

## Why “hard”?

- **Answer-list variants:** each correct option appears in multiple phrasing/list variants (e.g., reordered enumerations, equivalent wording), so models cannot rely on surface-form recall and must reason over the content.
- **RL-friendly targets:** every item includes one canonical correct answer plus both a single incorrect answer and the full set of incorrect answers, making it plug-and-play for DPO, RLAIF/GRPO, and contrastive objectives.
- **Chat formatting:** adds lightweight `messages` (and an optional `system_prompt`) not present in the original dataset, convenient for instruction-tuned models and SFT.

## Intended uses

- Robust evaluation of medical QA beyond memorization.
- SFT with chat-style prompts.
- DPO and other RL setups using `single_incorrect_answer` or `incorrect_answers`.
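As a sketch of the DPO use case, a record's question and options can form the prompt while the canonical correct answer and `single_incorrect_answer` supply the chosen/rejected pair. The field semantics here are assumed from the schema section, and the sample record is fabricated for illustration:

```python
# Sketch: turning one record into a DPO-style preference pair.
# Field semantics (input/options/output) are assumptions based on
# the schema section; verify them against real rows before training.

def to_dpo_pair(record):
    """Build a {prompt, chosen, rejected} dict from a dataset record."""
    prompt = record["input"] + "\n" + record["options"]
    chosen = record["output"]                     # canonical correct answer
    rejected = record["single_incorrect_answer"]  # one hard negative
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

# Fabricated sample record for illustration only.
example = {
    "input": "Which vitamin deficiency causes scurvy?",
    "options": "A) Vitamin A\nB) Vitamin B12\nC) Vitamin C\nD) Vitamin D",
    "output": "C) Vitamin C",
    "single_incorrect_answer": "D) Vitamin D",
}
pair = to_dpo_pair(example)
```

The same record yields up to three pairs if you iterate over `incorrect_answers` instead of using the single hard negative.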

## Data schema (fields)

- `id`: str
- `input`: str (the question text)
- `opa` / `opb` / `opc` / `opd`: str (the four answer options)
- `cop`: int (0-based index of the correct option)
- `choice_type`: str
- `exp`: str (explanation)
- `subject_name` / `topic_name`: str
- `output`: str (canonical correct answer)
- `options`: str (formatted listing of the options)
- `letter`: str (A/B/C/D)
- `incorrect_letters`: list[str]
- `incorrect_answers`: list[str]
- `single_incorrect_answer`: str
- `system_prompt`: str (optional)
- `messages`: list[{role: "system" | "user" | "assistant", content: str}]
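A minimal sketch of working with these fields, assuming `cop` indexes into the `opa`–`opd` options in A–D order (the sample row below is fabricated; real rows come from `datasets.load_dataset("mkurman/medmcqa-hard")`):

```python
# Sketch: recovering the correct option from `cop`, assuming opa..opd
# are ordered A..D. The row is fabricated for illustration.

OPTION_KEYS = ("opa", "opb", "opc", "opd")
LETTERS = "ABCD"

def correct_answer(row):
    """Return (letter, text) of the correct option selected by `cop`."""
    options = [row[k] for k in OPTION_KEYS]
    return LETTERS[row["cop"]], options[row["cop"]]

row = {
    "opa": "Vitamin A", "opb": "Vitamin B12",
    "opc": "Vitamin C", "opd": "Vitamin D",
    "cop": 2, "letter": "C",
}
letter, text = correct_answer(row)
assert letter == row["letter"]  # `letter` should agree with `cop`
```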

## Example

```json
{
  "input": "Which of the following is true about …?",
  "opa": "…",
  "opb": "…",
  "opc": "…",
  "opd": "…",
  "letter": "C",
  "cop": 2,
  "incorrect_answers": ["A …", "B …", "D …"],
  "single_incorrect_answer": "B …",
  "messages": [
    {"role": "system", "content": "You are a medical tutor."},
    {"role": "user", "content": "Q: Which of the following…?\nA) …\nB) …\nC) …\nD) …"}
  ]
}
```
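The example above carries only system and user turns, so for SFT you typically append the assistant's gold answer yourself. A sketch, where the `"letter) text"` answer format is an assumption rather than the dataset's canonical `output` format:

```python
# Sketch: completing a `messages` list with the gold answer turn for SFT.
# The "letter) text" formatting is an assumption for illustration.

def with_target(messages, letter, answer_text):
    """Append the assistant's gold answer as the final chat turn."""
    return messages + [
        {"role": "assistant", "content": f"{letter}) {answer_text}"}
    ]

msgs = [
    {"role": "system", "content": "You are a medical tutor."},
    {"role": "user", "content": "Q: Which vitamin deficiency causes scurvy?"},
]
out = with_target(msgs, "C", "Vitamin C")
```

The completed list can then be fed to a tokenizer's chat template for supervised fine-tuning.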

## Source & attribution

Derived from MedMCQA (Pal, Umapathi, Sankarasubbu; CHIL 2022). Please cite the original dataset/paper when using this work.

**Safety note:** research and education use only. Not for clinical use.