---
task_categories:
- table-question-answering
language:
- en
tags:
- medical
size_categories:
- 10K<n<100K
---
# EHR-R1: A Reasoning-Enhanced Foundational Language Model for Electronic Health Record Analysis
This repository contains the EHR-Bench dataset, presented in the paper *EHR-R1: A Reasoning-Enhanced Foundational Language Model for Electronic Health Record Analysis*.
EHR-Bench is a new, comprehensive benchmark introduced to rigorously evaluate Large Language Models (LLMs) on Electronic Health Record (EHR) analysis tasks.
- Source and Purpose: It is derived from the MIMIC-IV dataset and serves as the primary in-distribution benchmark for the EHR-R1 model. Its goal is to provide a balanced and comprehensive assessment of both reasoning and task-specific performance across a broad spectrum of clinically relevant settings, mirroring real-world EHR challenges.
- Composition: The benchmark spans 42 distinct tasks organized into two major types:
- Decision-Making Tasks (24 tasks): These are generative problems that require the model to recommend the next appropriate intervention given a specific medical event. They include tasks like diagnosis, treatment, and service recommendation.
- Risk-Prediction Tasks (18 tasks): These are binary classification problems where the model forecasts the occurrence of a significant medical event within a specified horizon. They cover subtypes such as mortality, readmission, and length of stay.
- Paper: https://huggingface.co/papers/2510.25628
- GitHub Repository: https://github.com/MAGIC-AI4Med/EHR-R1
## Structure
Each item in the JSONL file contains the keys below:
- idx: Unique ID for each sample
- instruction: Task instruction; samples of the same task share the same instruction
- input: EHR input after text serialization
- output: Output used for training (not used for the test set)
- candidates: Candidate options provided for the untrained model
- task_info: Task-related information, including:
  - target_key: The column name from the EHR used to retrieve the prediction label; this item is `None` for the risk-prediction tasks
  - events: Event types related to the prediction label
  - metric: The metric used to calculate the score for this task
  - target: The raw label in string format
  - label: The label used to calculate the score
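The fields above can be read with any JSONL loader. Below is a minimal sketch in plain Python; the filename `ehr_bench.jsonl` is a placeholder, not the actual file name in this repository:

```python
import json

def load_jsonl(path):
    """Yield one sample dict per non-empty line of a JSONL file."""
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Each yielded sample exposes the keys described above, e.g.:
# for sample in load_jsonl("ehr_bench.jsonl"):
#     print(sample["idx"], sample["task_info"]["metric"])
```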
To prevent leakage of identifying information from the MIMIC-IV dataset, we removed fields such as subject_id, hadm_id, and other details that might link back to the original MIMIC-IV data. The complete dataset can be found in MIMIC-IV-Ext-EHR-Analysis on PhysioNet (not yet released).
## 📖 Citation
If you find our work helpful or inspiring, please feel free to cite it:
```bibtex
@article{liao2025ehrr1,
  title={{EHR-R1: A Reasoning-Enhanced Foundational Language Model for Electronic Health Record Analysis}},
  author={Liao, Yusheng and Wu, Chaoyi and Liu, Junwei and Jiang, Shuyang and Qiu, Pengcheng and Wang, Haowen and Yue, Yun and Zhen, Shuai and Wang, Jian and Fan, Qianrui and Gu, Jinjie and Zhang, Ya and Wang, Yanfeng and Wang, Yu and Xie, Weidi},
  journal={arXiv preprint arXiv:2510.25628},
  year={2025}
}
```