---
task_categories:
- table-question-answering
language:
- en
tags:
- medical
size_categories:
- 10K<n<100K
---

*(EHR-R1 teaser image)*

## Structure

Each item in the `jsonl` file contains the keys below:

* **idx**: Unique ID for each sample
* **instruction**: Task instruction; samples from the same task share the same instruction
* **input**: EHR input after text serialization
* **output**: Output used for training (this field is not useful for the test set)
* **candidates**: Candidate options provided for the untrained model
* **task_info**: Task-related information, including:
  * **target_key**: The column name from the EHR used to retrieve the prediction label; this field is `None` for the `risk prediction` task
  * **events**: Event types related to the prediction label
  * **metric**: The metric used to calculate the score for this task
  * **target**: The raw label in string format
  * **label**: The label used to calculate the score

> To prevent leakage of native data from the MIMIC-IV dataset, we removed fields such as `subject_id`, `hadm_id`, and other details that might link back to the original MIMIC-IV data. The complete dataset can be found in MIMIC-IV-Ext-EHR-Analysis on PhysioNet (not yet released).

## 📖 Citation

If you find our work helpful or inspiring, please feel free to cite it:

```bib
@article{liao2025ehrr1,
  title={{EHR-R1: A Reasoning-Enhanced Foundational Language Model for Electronic Health Record Analysis}},
  author={Liao, Yusheng and Wu, Chaoyi and Liu, Junwei and Jiang, Shuyang and Qiu, Pengcheng and Wang, Haowen and Yue, Yun and Zhen, Shuai and Wang, Jian and Fan, Qianrui and Gu, Jinjie and Zhang, Ya and Wang, Yanfeng and Wang, Yu and Xie, Weidi},
  journal={arXiv preprint arXiv:2510.25628},
  year={2025}
}
```