---
license: apache-2.0
---
# [ 🎯 NeurIPS 2025 ] 3D-RAD 🩻: A Comprehensive 3D Radiology Med-VQA Dataset with Multi-Temporal Analysis and Diverse Diagnostic Tasks
<div align="center">
  <a href="https://github.com/Tang-xiaoxiao/3D-RAD/stargazers">
    <img src="https://img.shields.io/github/stars/Tang-xiaoxiao/3D-RAD?style=social" />
  </a>
  <a href="https://arxiv.org/abs/2506.11147">
    <img src="https://img.shields.io/badge/arXiv-Paper-b31b1b.svg?logo=arxiv" />
  </a>
  <a href="https://GitHub.com/Naereen/StrapDown.js/graphs/commit-activity">
    <img src="https://img.shields.io/badge/Maintained%3F-yes-green.svg" />
  </a>
</div>

## πŸ“’ News

**What's New in This Update πŸš€**

- **2025.10.23**: πŸ”₯ Updated **the latest version** of the paper!
- **2025.09.19**: πŸ”₯ Paper accepted to **NeurIPS 2025**! 🎯
- **2025.05.16**: πŸ”₯ Set up the repository and committed the dataset!

## πŸ” Overview
πŸ’‘ In this repository, we present the dataset for **["3D-RAD: A Comprehensive 3D Radiology Med-VQA Dataset with Multi-Temporal Analysis and Diverse Diagnostic Tasks"](https://arxiv.org/pdf/2506.11147)**.

The code for our project is available on GitHub: [Tang-xiaoxiao/3D-RAD](https://github.com/Tang-xiaoxiao/3D-RAD)

In our project, we present **3D-RAD**, a large-scale dataset designed to advance 3D Med-VQA using radiology CT scans. It encompasses six diverse VQA tasks: **Anomaly Detection** (Task 1), **Image Observation** (Task 2), **Medical Computation** (Task 3), **Existence Detection** (Task 4), **Static Temporal Diagnosis** (Task 5), and **Longitudinal Temporal Diagnosis** (Task 6).

![overview](https://github.com/Tang-xiaoxiao/3D-RAD/blob/main/Figures/overview.png?raw=true)
![main](https://github.com/Tang-xiaoxiao/3D-RAD/blob/main/Figures/main.png?raw=true)

## πŸ“Š 3D-RAD Dataset
The `3DRAD` directory contains the QA data without 3D images.
The full dataset with 3D images is available at [3D-RAD_Dataset](https://huggingface.co/datasets/Tang-xiaoxiao/3D-RAD); for efficient model input, the original CT images were preprocessed and converted to `.npy` format.

![distribution](https://github.com/Tang-xiaoxiao/3D-RAD/blob/main/Figures/distribution.png?raw=true)
![construction](https://github.com/Tang-xiaoxiao/3D-RAD/blob/main/Figures/Construction.png?raw=true)

### πŸ“ Images/
This folder contains preprocessed 3D CT volumes in `.npy` format.  
Each file is structured to facilitate direct input into vision-language models.  
- Purpose: Standardized model input across all tasks.

### πŸ“ train/ and πŸ“ test/
These folders contain the question-answer (QA) pairs categorized by task.  
Each file corresponds to a specific QA task such as anomaly detection, measurement, or temporal reasoning.

- `train/`: QA pairs for model training  
- `test/`: QA pairs for model evaluation  

### Fields
- `VolumeName`: File name of the associated CT volume (matches the file in `Images/`)  
- `Question`: The natural language question  
- `Answer`: The ground truth answer  
- `QuestionType`: Either `open` or `closed`  
- `AnswerChoice`: Correct option (A/B/C/D) for closed questions  
- `Choice A`–`Choice D`: Candidate options for closed questions  
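As a concrete illustration, a QA record can be paired with its CT volume via `VolumeName`. The following is a minimal sketch; the record shown is a made-up example, not an entry from the dataset, and the `Images/` path is assumed to follow the layout described above.

```python
import numpy as np

# Hypothetical closed-question record with the fields listed above.
record = {
    "VolumeName": "example_volume.npy",   # matches a file in Images/
    "Question": "Is there a pleural effusion?",
    "Answer": "No",
    "QuestionType": "closed",
    "AnswerChoice": "B",
    "Choice A": "Yes",
    "Choice B": "No",
}

def load_volume(record, images_dir="Images"):
    """Load the preprocessed 3D CT volume referenced by a QA record."""
    return np.load(f"{images_dir}/{record['VolumeName']}")

def answer_text(record):
    """Resolve the ground-truth text: the chosen option for closed
    questions, the free-form Answer field otherwise."""
    if record["QuestionType"] == "closed":
        return record[f"Choice {record['AnswerChoice']}"]
    return record["Answer"]

print(answer_text(record))  # β†’ No
```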

## πŸ€– M3D-RAD Model
To assess the utility of 3D-RAD, we **fine-tuned two M3D model variants** with different parameter scales, thereby constructing the M3D-RAD models. You can find our fine-tuned models in [M3D-RAD_Models](https://huggingface.co/Tang-xiaoxiao/M3D-RAD).

![finetuned](https://github.com/Tang-xiaoxiao/3D-RAD/blob/main/Figures/finetuned.png?raw=true)

## πŸ“ˆ Evaluation

### Zero-Shot Evaluation
We conducted **zero-shot evaluation** of several state-of-the-art 3D medical vision-language models on our benchmark to assess their generalization capabilities.

![zeroshot](https://github.com/Tang-xiaoxiao/3D-RAD/blob/main/Figures/zeroshot.png?raw=true)

The `RadFM` and `M3D` directories contain the code for evaluating the RadFM and M3D models on our 3D-RAD benchmark. The evaluation builds on the official [RadFM](https://github.com/chaoyi-wu/RadFM) and [M3D](https://github.com/BAAI-DCAI/M3D) codebases. To run our evaluation, first install the requirements and download the models as described in those base repositories.

Compared to the base code, we make the following modifications. In the `RadFM` directory, we add a new dataset in `RadFM/src/Dataset/dataset/rad_dataset.py`, modify the test dataset in `RadFM/src/Dataset/multi_dataset_test.py`, and add a new Python file to evaluate our benchmark in `RadFM/src/eval_3DRAD.py`. In the `M3D` directory, we add a new dataset in `M3D/Bench/dataset/multi_dataset.py` and a new Python file to evaluate our benchmark in `M3D/Bench/eval/eval_3DRAD.py`.

You can evaluate RadFM on our 3D-RAD benchmark by running:

```shell
cd 3D-RAD/RadFM/src
python eval_3DRAD.py \
  --file_path={your test file_path} \
  --output_path={your saved output_path}
```

You can evaluate M3D on our 3D-RAD benchmark by running:

```shell
cd 3D-RAD/M3D
python Bench/eval/eval_3DRAD.py \
  --model_name_or_path={your model_name} \
  --vqa_data_test_path={your test file_path} \
  --output_dir={your saved output_dir}
```
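Once an evaluation script has produced predictions, closed-question accuracy can be scored with a few lines. The sketch below assumes predictions were collected as a list of `{"AnswerChoice": ..., "Prediction": ...}` dicts; it is an illustration of the metric, not the scoring code inside `eval_3DRAD.py`.

```python
def closed_accuracy(results):
    """Fraction of closed questions where the prediction starts with
    the correct option letter (case-insensitive)."""
    if not results:
        return 0.0
    correct = sum(
        r["Prediction"].strip().upper().startswith(r["AnswerChoice"])
        for r in results
    )
    return correct / len(results)

# Toy predictions: two correct ("A", "B. No"), one wrong ("A" vs "C").
results = [
    {"AnswerChoice": "A", "Prediction": "A"},
    {"AnswerChoice": "B", "Prediction": "B. No"},
    {"AnswerChoice": "C", "Prediction": "A"},
]
print(closed_accuracy(results))  # β†’ 0.6666666666666666
```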

### Scaling with Varying Training Set Sizes
To further investigate the impact of dataset scale on model performance, we randomly **sampled 1%, 10%, and 100%** of the training data per task and fine-tuned M3D accordingly.

![varysizes](https://github.com/Tang-xiaoxiao/3D-RAD/blob/main/Figures/varysizes.png?raw=true)

## πŸ“ Data Source
The original CT scans in our dataset are derived from [CT-RATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE), which is released under a CC-BY-NC-SA license. We fully comply with the license terms by using the data for non-commercial academic research, providing proper attribution.

## πŸ”— Model Links

| Model | Paper                                                        |
| ----- | ------------------------------------------------------------ |
| [RadFM](https://github.com/chaoyi-wu/RadFM) | Towards Generalist Foundation Model for Radiology by Leveraging Web-scale 2D&3D Medical Data |
| [M3D](https://github.com/BAAI-DCAI/M3D)   | M3D: Advancing 3D Medical Image Analysis with Multi-Modal Large Language Models |
| OmniV-Med (not open-source) | OmniV-Med: Scaling Medical Vision-Language Model for Universal Visual Understanding |