Tang-xiaoxiao committed on
Commit 30a660d · verified · 1 Parent(s): ca087e7

Update README.md

Files changed (1): README.md (+75 −13)
@@ -1,27 +1,55 @@
  ---
  license: apache-2.0
  ---
- # 3D-RAD
- The official Dataset for the paper "3D-RAD: A Comprehensive 3D Radiology Med-VQA Dataset with Multi-Temporal Analysis and Diverse Diagnostic Tasks".

- In our project, we collect a large-scale dataset designed to advance 3D Med-VQA using radiology CT scans, 3D-RAD, encompasses six diverse VQA tasks: anomaly detection (task 1), image observation (task 2), medical computation (task 3), existence detection (task 4), static temporal diagnosis (task 5), and longitudinal temporal diagnosis (task 6).

- ![Main Figure](https://github.com/Tang-xiaoxiao/M3D-RAD/blob/main/Figures/main.png?raw=true)

- ## 📁 Images/
  This folder contains preprocessed 3D CT volumes in `.npy` format.
  Each file is structured to facilitate direct input into vision-language models.
  - Purpose: Standardized model input across all tasks.

- ## 📁 train/ and 📁 test/
  These folders contain the question-answer (QA) pairs categorized by task.
  Each file corresponds to a specific QA task such as anomaly detection, measurement, or temporal reasoning.

  - `train/`: QA pairs for model training
  - `test/`: QA pairs for model evaluation

- # Fields:
  - `VolumeName`: File name of the associated CT volume (matches the file in `Images/`)
  - `Question`: The natural language question
  - `Answer`: The ground truth answer
@@ -29,16 +57,50 @@ Each file corresponds to a specific QA task such as anomaly detection, measureme
  - `AnswerChoice`: Correct option (A/B/C/D) for closed questions
  - `Choice A`–`Choice D`: Candidate options for closed questions

- ## Code
- You can find our code in [M3D-RAD_Code](https://github.com/Tang-xiaoxiao/M3D-RAD).

- ## M3D-RAD Model
- You can find our model in [M3D-RAD_Models](https://huggingface.co/Tang-xiaoxiao/M3D-RAD).

- ## Data Source
  The original CT scans in our dataset are derived from [CT-RATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE), which is released under a CC-BY-NC-SA license. We fully comply with the license terms by using the data for non-commercial academic research, providing proper attribution.

- ## Model Links

  | Model | Paper |
  | ----- | ------------------------------------------------------------ |
 
  ---
  license: apache-2.0
  ---
+ # [🎯 NeurIPS 2025] 3D-RAD 🩻: A Comprehensive 3D Radiology Med-VQA Dataset with Multi-Temporal Analysis and Diverse Diagnostic Tasks
+ <div align="center">
+ <a href="https://github.com/Tang-xiaoxiao/3D-RAD/stargazers">
+ <img src="https://img.shields.io/github/stars/Tang-xiaoxiao/3D-RAD?style=social" />
+ </a>
+ <a href="https://arxiv.org/abs/2506.11147">
+ <img src="https://img.shields.io/badge/arXiv-Paper-b31b1b.svg?logo=arxiv" />
+ </a>
+ <a href="https://GitHub.com/Naereen/StrapDown.js/graphs/commit-activity">
+ <img src="https://img.shields.io/badge/Maintained%3F-yes-green.svg" />
+ </a>
+ </div>

+ ## 📒 News

+ **What's New in This Update 🚀**

+ - **2025.10.23**: 🔥 Updated **the latest version** of the paper!
+ - **2025.09.19**: 🔥 Paper accepted to **NeurIPS 2025**! 🎯
+ - **2025.05.16**: 🔥 Set up the repository and committed the dataset!

+ ## πŸ” Overview
26
+ πŸ’‘ In this repository, we present the dataset for **["3D-RAD: A Comprehensive 3D Radiology Med-VQA Dataset with Multi-Temporal Analysis and Diverse Diagnostic Tasks"](https://arxiv.org/pdf/2506.11147)**.
27
+
28
+ In our project, we collect a large-scale dataset designed to advance 3D Med-VQA using radiology CT scans, 3D-RAD, encompasses six diverse VQA tasks: **Anomaly Detection** (task 1), **Image Observation** (task 2), **Medical Computation** (task 3), **Existence Detection** (task 4), **Static Temporal Diagnosis** (task 5), and **Longitudinal Temporal Diagnosis** (task 6).
29
+
30
+ ![overview](https://github.com/Tang-xiaoxiao/3D-RAD/blob/main/Figures/overview.png?raw=true)
31
+ ![main](https://github.com/Tang-xiaoxiao/3D-RAD/blob/main/Figures/main.png?raw=true)
32
+
33
+ ## 📊 3D-RAD Dataset
+ The `3DRAD` directory contains the QA data without the 3D images.
+ You can find the full dataset with 3D images in [3D-RAD_Dataset](https://huggingface.co/datasets/Tang-xiaoxiao/3D-RAD); for efficient model input, the original CT images were preprocessed and converted into `.npy` format.
+
+ ![distribution](https://github.com/Tang-xiaoxiao/3D-RAD/blob/main/Figures/distribution.png?raw=true)
+ ![construction](https://github.com/Tang-xiaoxiao/3D-RAD/blob/main/Figures/Construction.png?raw=true)
+
+ ### 📁 Images/
  This folder contains preprocessed 3D CT volumes in `.npy` format.
  Each file is structured to facilitate direct input into vision-language models.
  - Purpose: Standardized model input across all tasks.
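Since the volumes are plain NumPy arrays, feeding one into a pipeline is a one-line load. A minimal sketch (the file name and the `(depth, height, width)` layout below are illustrative assumptions, not guaranteed by the dataset; inspect a real file's shape before relying on it):

```python
# Minimal sketch of loading one preprocessed CT volume stored as .npy.
# The demo file and its (32, 256, 256) layout are invented for illustration.
import numpy as np

def load_volume(path):
    """Load a preprocessed CT volume from a .npy file."""
    return np.load(path)

# Synthetic stand-in volume, since we don't ship a real scan here:
demo = np.zeros((32, 256, 256), dtype=np.float32)  # assumed depth, height, width
np.save("demo_volume.npy", demo)
print(load_volume("demo_volume.npy").shape)  # (32, 256, 256)
```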

+ ### 📁 train/ and 📁 test/
  These folders contain the question-answer (QA) pairs categorized by task.
  Each file corresponds to a specific QA task such as anomaly detection, measurement, or temporal reasoning.

  - `train/`: QA pairs for model training
  - `test/`: QA pairs for model evaluation

+ ### Fields:
  - `VolumeName`: File name of the associated CT volume (matches the file in `Images/`)
  - `Question`: The natural language question
  - `Answer`: The ground truth answer

  - `AnswerChoice`: Correct option (A/B/C/D) for closed questions
  - `Choice A`–`Choice D`: Candidate options for closed questions
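To make the field layout concrete, here is a sketch with an invented record (the volume name, question, and choices are hypothetical, and JSON is only an assumed serialization for the task files; adapt to the actual on-disk format):

```python
# Hypothetical QA record illustrating the fields documented above.
import json

record = {
    "VolumeName": "example_volume.npy",  # matches a file in Images/ (invented name)
    "Question": "Is there evidence of pleural effusion?",
    "Answer": "No",
    "AnswerChoice": "B",
    "Choice A": "Yes",
    "Choice B": "No",
    "Choice C": "Uncertain",
    "Choice D": "Not applicable",
}

def resolve_answer(rec):
    """Map a closed question's AnswerChoice letter to the matching option text."""
    return rec[f"Choice {rec['AnswerChoice']}"]

# Round-trip through JSON as one plausible storage format:
with open("demo_task.json", "w") as f:
    json.dump([record], f)
with open("demo_task.json") as f:
    loaded = json.load(f)

print(resolve_answer(loaded[0]))  # No
```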

+ ## 🤖 M3D-RAD Model
+ To assess the utility of 3D-RAD, we **finetuned two M3D model variants** with different parameter scales, thereby constructing the M3D-RAD models. You can find our finetuned models in [M3D-RAD_Models](https://huggingface.co/Tang-xiaoxiao/M3D-RAD).
+
+ ![finetuned](https://github.com/Tang-xiaoxiao/3D-RAD/blob/main/Figures/finetuned.png?raw=true)
+
+ ## 📈 Evaluation
+
+ ### Zero-Shot Evaluation
+ We conducted **zero-shot evaluation** of several state-of-the-art 3D medical vision-language models on our benchmark to assess their generalization capabilities.
+
+ ![zeroshot](https://github.com/Tang-xiaoxiao/3D-RAD/blob/main/Figures/zeroshot.png?raw=true)
+
+ The `RadFM` and `M3D` directories contain the code for evaluating the RadFM and M3D models on our 3D-RAD benchmark. The base code comes from [RadFM](https://github.com/chaoyi-wu/RadFM) and [M3D](https://github.com/BAAI-DCAI/M3D), respectively. To run our evaluation, you should first satisfy the requirements and download the models according to the base code of these models.
+
+ Compared to the base code, we made the following modifications. In the `RadFM` directory, we add a new Dataset in `RadFM/src/Dataset/dataset/rad_dataset.py`, modify the test Dataset in `RadFM/src/Dataset/multi_dataset_test.py`, and add a new Python file to evaluate our benchmark, `RadFM/src/eval_3DRAD.py`. In the `M3D` directory, we add a new Dataset in `M3D/Bench/dataset/multi_dataset.py` and a new Python file to evaluate our benchmark, `M3D/Bench/eval/eval_3DRAD.py`.
+
+ You can evaluate RadFM on our 3D-RAD benchmark by running:
+
+ ```shell
+ cd 3D-RAD/RadFM/src
+ python eval_3DRAD.py \
+     --file_path={your test file_path} \
+     --output_path={your saved output_path}
+ ```
+
+ You can evaluate M3D on our 3D-RAD benchmark by running:
+
+ ```shell
+ cd 3D-RAD/M3D
+ python Bench/eval/eval_3DRAD.py \
+     --model_name_or_path={your model_name} \
+     --vqa_data_test_path={your test file_path} \
+     --output_dir={your saved output_dir}
+ ```
+
+ ### Scaling with Varying Training Set Sizes
+ To further investigate the impact of dataset scale on model performance, we randomly **sampled 1%, 10%, and 100%** of the training data per task and fine-tuned M3D accordingly.

+ ![varysizes](https://github.com/Tang-xiaoxiao/3D-RAD/blob/main/Figures/varysizes.png?raw=true)
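The per-task subsampling step can be sketched as follows (an illustrative snippet, not the authors' actual sampling script; the dummy records are stand-ins for a task's training QA pairs):

```python
# Illustrative per-task subsampling: draw a fixed fraction of a task's
# training records with a seeded RNG so the subset is reproducible.
import random

def sample_fraction(records, fraction, seed=0):
    """Return a random subset containing `fraction` of `records` (at least 1)."""
    rng = random.Random(seed)
    k = max(1, int(len(records) * fraction))
    return rng.sample(records, k)

task_train = [{"Question": f"q{i}"} for i in range(200)]  # dummy records
subset_10 = sample_fraction(task_train, 0.10)
print(len(subset_10))  # 20
```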
 
+ ## πŸ“ Data Source
101
  The original CT scans in our dataset are derived from [CT-RATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE), which is released under a CC-BY-NC-SA license. We fully comply with the license terms by using the data for non-commercial academic research, providing proper attribution.
102
 
103
+ ## πŸ”— Model Links
104
 
105
  | Model | Paper |
106
  | ----- | ------------------------------------------------------------ |