---
license: apache-2.0
language:
  - en
tags:
  - medical
  - benchmark
  - evaluation
  - clinical-guidelines
  - diagnostic-reasoning
pretty_name: AMEGA-LLM Benchmark
task_categories:
  - question-answering
configs:
  - config_name: cases
    data_files: cases.csv
    sep: ;
    default: true
  - config_name: questions
    data_files: questions.csv
    sep: ;
  - config_name: sections
    data_files: sections.csv
    sep: ;
  - config_name: criteria
    data_files: criteria.csv
    sep: ;
extra_gated_heading: Evaluation-only access
extra_gated_description: >-
  Do NOT use this dataset for model training, pretraining, fine-tuning, or data
  augmentation.
extra_gated_prompt: 'By requesting access, you agree to all of the following:'
extra_gated_button_content: Agree and request access
extra_gated_fields:
  I will not use this dataset for training/pretraining/fine-tuning/data augmentation: checkbox
  I will not redistribute the Q/A content: checkbox
  Intended use (one line): text
  Affiliation (optional): text
---

# AMEGA-LLM Benchmark

20 guideline-based clinical cases across 13 specialties with open-ended questions and a detailed rubric (1,337 criteria) to evaluate LLM medical reasoning and adherence to clinical guidelines.

**Evaluation-only: do not use for training.**

## Files / configs

- `cases` – 20 case narratives + metadata
- `questions` – all questions per case
- `sections` – rubric sections (with point weights)
- `criteria` – fine-grained checklist items
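All four tables are semicolon-separated CSV (note the `sep: ;` in the configs above). If you prefer working with the raw files directly, a minimal sketch with pandas is shown below; the column names here are illustrative only, not the dataset's actual schema:

```python
import io

import pandas as pd

# Toy sample mimicking the semicolon-separated format of the dataset's
# CSV files. Column names are hypothetical; inspect the real files for
# the actual schema.
sample = io.StringIO(
    "case_id;specialty;narrative\n"
    "1;Cardiology;A 58-year-old presents with chest pain...\n"
    "2;Neurology;A 34-year-old presents with sudden weakness...\n"
)

# The key detail: pass sep=";" rather than the default comma.
df = pd.read_csv(sample, sep=";")
print(df.shape)  # (2, 3)
```

The same `sep=";"` argument applies when reading the downloaded `cases.csv`, `questions.csv`, `sections.csv`, or `criteria.csv` directly.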

## Quick start

```python
from datasets import load_dataset

# Load the cases table
cases = load_dataset("row56/amega-benchmark", "cases")
print(cases["train"][0])

# Load the other tables
questions = load_dataset("row56/amega-benchmark", "questions")
sections = load_dataset("row56/amega-benchmark", "sections")
criteria = load_dataset("row56/amega-benchmark", "criteria")
```
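Since questions belong to cases, you will typically want to join the tables on a shared case identifier. A minimal sketch with pandas and toy data follows; the column names (`case_id`, `question_id`, etc.) are assumptions for illustration, so check each table's actual schema after loading:

```python
import pandas as pd

# Toy stand-ins for the real tables; field names are hypothetical.
cases = pd.DataFrame(
    {"case_id": [1, 2], "specialty": ["Cardiology", "Neurology"]}
)
questions = pd.DataFrame(
    {
        "question_id": [10, 11, 12],
        "case_id": [1, 1, 2],
        "question": [
            "Initial workup?",
            "First-line therapy?",
            "Key differential?",
        ],
    }
)

# One row per question, annotated with its parent case's metadata.
merged = questions.merge(cases, on="case_id", how="left")
print(merged[["question_id", "specialty"]])
```

With the real tables, convert a split to pandas first (e.g. `questions["train"].to_pandas()`) and merge on whatever identifier column the schema actually uses.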

## Intended use & canary

This dataset is intended for benchmarking and evaluating the clinical reasoning and guideline adherence of LLMs. Do not fine-tune or train models on this dataset.
A unique canary marker is embedded in the repository to help detect misuse; please leave it intact and do not reproduce it in documentation or prompts.
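The rubric structure (sections with point weights, each containing binary checklist criteria) suggests a straightforward scoring scheme. The sketch below is one plausible reading, not the paper's official scoring code, and all field names are hypothetical:

```python
# Sketch: score a case as the weight-sum over sections of the fraction
# of that section's criteria judged as met. Structure and names are
# assumptions for illustration only.
criteria_judgments = [
    {"section": "diagnosis", "met": True},
    {"section": "diagnosis", "met": False},
    {"section": "therapy", "met": True},
]
section_weights = {"diagnosis": 2.0, "therapy": 1.0}


def case_score(judgments, weights):
    """Weighted sum of per-section fractions of met criteria."""
    score = 0.0
    for section, weight in weights.items():
        judged = [j["met"] for j in judgments if j["section"] == section]
        if judged:
            score += weight * sum(judged) / len(judged)
    return score


print(case_score(criteria_judgments, section_weights))  # 2.0*0.5 + 1.0*1.0 = 2.0
```

In practice the "met/unmet" judgments would come from a grader (human or LLM) applied to the model's free-text answers against the `criteria` table.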

## Citation

Fast et al., npj Digital Medicine (2024).

```bibtex
@article{fast2024amega,
  title={Autonomous Medical Evaluation for Guideline Adherence of LLMs},
  journal={npj Digital Medicine},
  year={2024}
}
```

## Version

- v1.0 – initial release (Aug 10, 2025)
