---
tags:
- medical
---

This is the official data repository for [RadIR: A Scalable Framework for Multi-Grained Medical Image Retrieval via Radiology Report Mining](https://www.arxiv.org/abs/2503.04653).

We mine the image-paired reports to extract findings on diverse anatomical structures, and quantify the multi-grained image-image relevance via [RaTEScore](https://arxiv.org/abs/2406.16845).
Specifically, we have extended two public datasets for the multi-grained medical image retrieval task (see the file layout sketch after the list):
- MIMIC-IR is extended from [MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.1.0/), containing 377,110 images and 90 anatomical structures.
- CTRATE-IR is extended from [CTRATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE), containing 25,692 images and 48 anatomical structures.
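
For reference, the file paths used in the demos below imply the following release layout (only the files referenced in this README are listed; other splits and anatomies presumably follow the same pattern):

```
CTRATE-IR/
├── train_filtered.jsonl                # whole-image samples (img_path, text)
├── CT_train_ratescore.npy              # whole-image pairwise relevance
└── anatomy/
    ├── train_entity/{anatomy}.csv      # per-anatomy sample ids and findings
    └── train_ratescore/{anatomy}.npy   # per-anatomy pairwise relevance
MIMIC-IR/
├── val_caption.csv
├── val_ratescore.npy
└── anatomy/
    ├── train_caption/{anatomy}.csv
    └── train_ratescore/{anatomy}.npy
```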

**Note:** For the MIMIC-IR dataset, you need to manually merge and decompress the files.
After downloading all split parts (from `MIMIC-IR.tar.gz.part00` to `MIMIC-IR.tar.gz.part08`), execute the following commands in the same directory:
```
cat MIMIC-IR.tar.gz.part* > MIMIC-IR.tar.gz
tar xvzf MIMIC-IR.tar.gz
```

A simple demo to read the data from CTRATE-IR (or MIMIC-IR, see the commented-out lines):
```python
import pandas as pd
import numpy as np

# CTRATE-IR: per-anatomy findings and pairwise relevance
anatomy_condition = 'bone'
sample_A_idx = 10
sample_B_idx = 20

# Each CSV has one row per sample: column 0 is the sample id,
# column 1 is the findings text extracted for this anatomy.
df = pd.read_csv(f'CTRATE-IR/anatomy/train_entity/{anatomy_condition}.csv')
id_ls = df.iloc[:, 0].tolist()
findings_ls = df.iloc[:, 1].tolist()

# N x N matrix of pairwise relevance scores for this anatomy.
simi_tab = np.load(f'CTRATE-IR/anatomy/train_ratescore/{anatomy_condition}.npy')

# # MIMIC-IR
# anatomy_condition = 'lungs'
# sample_A_idx = 10
# sample_B_idx = 20
# df = pd.read_csv(f'MIMIC-IR/anatomy/train_caption/{anatomy_condition}.csv')
# id_ls = df.iloc[:, 0].tolist()
# findings_ls = df.iloc[:, 1].tolist()
# simi_tab = np.load(f'MIMIC-IR/anatomy/train_ratescore/{anatomy_condition}.npy')

print(f'Sample {id_ls[sample_A_idx]} findings on {anatomy_condition}: {findings_ls[sample_A_idx]}')
print(f'Sample {id_ls[sample_B_idx]} findings on {anatomy_condition}: {findings_ls[sample_B_idx]}')
print(f'Relevance score: {simi_tab[sample_A_idx, sample_B_idx]}')
```
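
The relevance matrix can be used directly for retrieval. Below is a minimal sketch that reuses the variables from the demo above (`simi_tab`, `id_ls`, `findings_ls`); the choice of `k` and the exclusion of the query itself are illustrative, not part of the release:

```python
# Rank all samples by their relevance to a query sample on the chosen anatomy.
query_idx = sample_A_idx
k = 5

scores = simi_tab[query_idx].astype(np.float32)  # cast from uint8 before masking
scores[query_idx] = -1.0                         # exclude the query itself
topk = np.argsort(scores)[::-1][:k]

for rank, idx in enumerate(topk, start=1):
    print(f'#{rank}: {id_ls[idx]} (score {simi_tab[query_idx, idx]}): {findings_ls[idx]}')
```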

Note that the scores have been normalized to the range 0 to 100 and stored as uint8. We also provide whole image-level relevance, quantified from the entire reports:
```python
import os
import json
import numpy as np
# import pandas as pd  # needed for the MIMIC-IR variant below

sample_A_idx = 10
sample_B_idx = 20

# CTRATE-IR: whole-image relevance computed from the full reports
with open('CTRATE-IR/train_filtered.jsonl', 'r') as f:
    data = f.readlines()
data = [json.loads(l) for l in data]
simi_tab = np.load('CTRATE-IR/CT_train_ratescore.npy')
sample_A_id = os.path.basename(data[sample_A_idx]['img_path'])
sample_B_id = os.path.basename(data[sample_B_idx]['img_path'])
sample_A_report = data[sample_A_idx]['text']
sample_B_report = data[sample_B_idx]['text']

# # MIMIC-IR
# data = pd.read_csv('MIMIC-IR/val_caption.csv')
# simi_tab = np.load('MIMIC-IR/val_ratescore.npy')
# sample_A_id = data.iloc[sample_A_idx]['File Path']
# sample_B_id = data.iloc[sample_B_idx]['File Path']
# sample_A_report = data.iloc[sample_A_idx]['Findings']
# sample_B_report = data.iloc[sample_B_idx]['Findings']

print(f'Sample {sample_A_id} report: {sample_A_report}\n')
print(f'Sample {sample_B_id} report: {sample_B_report}\n')
print(f'Whole-image relevance score: {simi_tab[sample_A_idx, sample_B_idx]}')
```
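
If you want to rescale the scores or avoid loading a large matrix into memory, here is a small sketch (the [0, 1] rescaling convention is ours, not part of the release):

```python
import numpy as np

# Memory-map the relevance matrix instead of reading it fully into RAM.
simi_tab = np.load('CTRATE-IR/CT_train_ratescore.npy', mmap_mode='r')

# Scores are uint8 in [0, 100]; rescale a pair (or the whole table) to [0, 1] floats.
score_01 = simi_tab[10, 20] / 100.0
print(score_01)
```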

For raw image data, you can download them from [CTRATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE) (or [RadGenome-ChestCT](https://huggingface.co/datasets/RadGenome/RadGenome-ChestCT)) and [MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.1.0/). We keep all sample IDs consistent, so you can easily locate the corresponding images.
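
As a rough illustration (not part of the release), the identifiers used above can be resolved against a local copy of the raw data. The snippet reuses `data` and `sample_A_id` from the whole-image demo; the root directories are placeholders, and the assumption that each id is a file name or relative path within the corresponding raw release is ours:

```python
import os

CT_RATE_ROOT = '/path/to/CT-RATE'      # placeholder local root of the CT-RATE download
MIMIC_CXR_ROOT = '/path/to/mimic-cxr'  # placeholder local root of the MIMIC-CXR download

# CTRATE-IR: 'img_path' in train_filtered.jsonl is assumed to name the CT-RATE volume file.
ct_volume = os.path.join(CT_RATE_ROOT, os.path.basename(data[sample_A_idx]['img_path']))

# MIMIC-IR: the 'File Path' column is assumed to be relative to the MIMIC-CXR root.
# mimic_image = os.path.join(MIMIC_CXR_ROOT, sample_A_id)
```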

**Citation**

If you find our data useful, please consider citing our work!
```
@article{zhang2025radir,
  title={RadIR: A Scalable Framework for Multi-Grained Medical Image Retrieval via Radiology Report Mining},
  author={Zhang, Tengfei and Zhao, Ziheng and Wu, Chaoyi and Zhou, Xiao and Zhang, Ya and Wang, Yanfeng and Xie, Weidi},
  journal={arXiv preprint arXiv:2503.04653},
  year={2025}
}
```