---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---
# EHR-R1: A Reasoning-Enhanced Foundational Language Model for Electronic Health Record Analysis
This repository contains the EHR-R1-8B model, part of the EHR-R1 series, a family of reasoning-enhanced Large Language Models (LLMs) tailored for Electronic Health Record (EHR) analysis. This work was presented in the paper [EHR-R1: A Reasoning-Enhanced Foundational Language Model for Electronic Health Record Analysis](https://arxiv.org/abs/2510.25628).
EHR-R1 aims to bridge the gap in existing LLMs for EHR analysis, which often face limitations due to narrow task coverage and a lack of EHR-oriented reasoning capabilities. This model series leverages EHR-Ins, a large-scale, comprehensive EHR reasoning instruction dataset, comprising 300k high-quality reasoning cases and 4M non-reasoning cases across 42 distinct EHR tasks. It employs a thinking-graph-driven framework to generate high-quality reasoning data at scale.
Through a multi-stage training paradigm, including domain adaptation, reasoning enhancement, and reinforcement learning, EHR-R1 systematically acquires domain knowledge and diverse reasoning capabilities, enabling accurate and robust EHR analysis. The models are evaluated on EHR-Bench, a new benchmark curated from MIMIC-IV, spanning 42 tasks.
For more details, including other models in the EHR-R1 series (EHR-R1-1.7B, EHR-R1-72B), datasets, and training scripts, please refer to the official GitHub repository.
## 💡 Key Highlights
- We open-source a large-scale instruction dataset EHR-Ins, including 3.5M non-reasoning samples and 300k reasoning samples.
- We open-source a comprehensive benchmark EHR-Bench, which covers 42 distinct EHR analysis tasks.
- We open-source EHR reasoning-enhanced LLMs EHR-R1, including EHR-R1-1.7B, EHR-R1-8B, and EHR-R1-72B.
- We open-source the "thinking-graph" pipeline, which synthesizes reasoning chains for EHR analysis tasks according to the relations among EHR entities.
## Usage with Hugging Face Transformers
To use the EHR-R1-8B model for inference with the transformers library, follow the example below.
### EHR Input Format
For any EHR data, format the EHR input in Markdown as shown below (a formatting sketch follows the templates):
- For an event with a single record:

  ```
  ## Event Name [Event Time (YYYY-MM-DD HH:MM:SS)]
  - ItemKey_1: ItemValue_1
  - ItemKey_2: ItemValue_2
  - ItemKey_3: ItemValue_3
  ```
- For an event with multiple records (e.g., labevents):

  ```
  ## Event Name [Event Time (YYYY-MM-DD HH:MM:SS)]
  | ItemKey_1 | ItemKey_2 | ItemKey_3 |
  | --------- | --------- | --------- |
  | ItemValue_1 | ItemValue_2 | ItemValue_3 |
  | ItemValue_1 | ItemValue_2 | ItemValue_3 |
  | ItemValue_1 | ItemValue_2 | ItemValue_3 |
  ```
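The snippet below is a minimal sketch of how structured EHR events could be serialized into this Markdown layout before being passed to the model. The helper names (`format_single_event`, `format_multi_event`) are illustrative only and are not part of the official EHR-R1 codebase.

```python
from typing import Dict, List


def format_single_event(name: str, time: str, items: Dict[str, str]) -> str:
    """Render an event with a single record as a '## Event Name [time]' heading plus a bullet list."""
    lines = [f"## {name} [{time}]"]
    lines += [f"- {key}: {value}" for key, value in items.items()]
    return "\n".join(lines)


def format_multi_event(name: str, time: str, columns: List[str], rows: List[List[str]]) -> str:
    """Render an event with multiple records (e.g., labevents) as a Markdown table."""
    lines = [f"## {name} [{time}]"]
    lines.append("| " + " | ".join(columns) + " |")
    lines.append("| " + " | ".join("---------" for _ in columns) + " |")
    for row in rows:
        lines.append("| " + " | ".join(str(value) for value in row) + " |")
    return "\n".join(lines)


# Example: build an EHR input string from two events (values are illustrative).
ehr_input = "\n\n".join([
    format_single_event(
        "Admission",
        "2167-12-07 00:00:00",
        {"Admission Type": "Emergency", "Admit Source": "Emergency Room"},
    ),
    format_multi_event(
        "Lab Results",
        "2167-12-07 11:30:00",
        ["Test Name", "Result", "Unit"],
        [["Troponin I", "12.5", "ng/mL"], ["Glucose", "250", "mg/dL"]],
    ),
])
print(ehr_input)
```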
### Inference Example
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "BlueZeros/EHR-R1-8B"  # EHR-R1-8B model ID on the Hugging Face Hub

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

ehr_input = """
## Admission
- Admit Source: Emergency Room
- Admitted From: Emergency Room
- Admission Type: Emergency
- Hospital Admission Date: 2167-12-07 00:00:00
- Hospital Discharge Date: 2167-12-10 00:00:00
## Diagnosis [2167-12-07 10:00:00]
- Principal Diagnosis: Acute myocardial infarction
- Secondary Diagnosis: Type 2 diabetes mellitus
## Lab Results [2167-12-07 11:30:00]
| Test Name | Result | Unit | Reference Range |
| --------- | ------ | ---- | --------------- |
| Troponin I | 12.5 | ng/mL | < 0.04 |
| Glucose | 250 | mg/dL | 70 - 100 |
"""
instruction = "What is the primary diagnosis for this patient based on the provided EHR data?"

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": ehr_input + "\n" + instruction},
]

# For EHR-R1-1.7B & EHR-R1-8B, control the reasoning mode by setting enable_thinking
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # set to False to disable thinking mode for the 1.7B/8B models
)

# For EHR-R1-72B, you can manually append "<think>\n\n</think>\n\n" to the prompt to close the reasoning mode.
text += "<think>\n\n</think>\n\n"  # manually close reasoning mode for EHR-R1-72B or when enable_thinking=False
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=2048,
    do_sample=False,  # greedy decoding for deterministic output
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
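To inspect the model's reasoning trace, you can instead enable thinking mode (for the 1.7B/8B models) and split the decoded output on the `</think>` tag. This is a sketch that assumes the model emits Qwen-style `<think>...</think>` blocks when thinking is enabled; if the tag is absent from the output, the whole output is treated as the answer.

```python
# Optional: enable reasoning mode (EHR-R1-1.7B / EHR-R1-8B) and separate the
# reasoning trace from the final answer.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # enable thinking mode
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=4096, do_sample=False)
output = tokenizer.decode(
    generated_ids[0][model_inputs.input_ids.shape[1]:], skip_special_tokens=True
)

# Split the reasoning trace from the final answer (assumes a Qwen-style </think> marker).
if "</think>" in output:
    reasoning, answer = output.split("</think>", 1)
    reasoning = reasoning.replace("<think>", "").strip()
else:
    reasoning, answer = "", output

print("Reasoning:", reasoning)
print("Answer:", answer.strip())
```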
## Citation
If you find our work helpful or inspiring, please feel free to cite it:
```bibtex
@article{liao2025ehrr1,
  title={EHR-R1: A Reasoning-Enhanced Foundational Language Model for Electronic Health Record Analysis},
  author={Liao, Yusheng and Wu, Chaoyi and Liu, Junwei and Jiang, Shuyang and Qiu, Pengcheng and Wang, Haowen and Yue, Yun and Zhen, Shuai and Wang, Jian and Fan, Qianrui and Gu, Jinjie and Zhang, Ya and Wang, Yanfeng and Wang, Yu and Xie, Weidi},
  journal={arXiv preprint arXiv:2510.25628},
  year={2025}
}
```