---
license: apache-2.0
task_categories:
- image-text-to-text
---

# 3D-RAD
The official Dataset for the paper "[3D-RAD: A Comprehensive 3D Radiology Med-VQA Dataset with Multi-Temporal Analysis and Diverse Diagnostic Tasks](https://huggingface.co/papers/2506.11147)".

3D-RAD is a large-scale dataset designed to advance 3D Med-VQA using radiology CT scans. It encompasses six diverse VQA tasks: anomaly detection (task 1), image observation (task 2), medical computation (task 3), existence detection (task 4), static temporal diagnosis (task 5), and longitudinal temporal diagnosis (task 6).

![Main Figure](https://github.com/Tang-xiaoxiao/M3D-RAD/blob/main/Figures/main.png?raw=true)


## πŸ“ Images/
This folder contains preprocessed 3D CT volumes in `.npy` format.  
Each file is structured to facilitate direct input into vision-language models.  
- Purpose: Standardized model input across all tasks.
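A minimal sketch of loading one of these volumes with NumPy (the file name and shape below are illustrative, not taken from the dataset):

```python
import numpy as np

# Create a small dummy array to stand in for a real preprocessed CT volume
# (illustrative only; real files live in Images/ and have dataset-specific shapes).
dummy = np.zeros((32, 256, 256), dtype=np.float32)
np.save("example_volume.npy", dummy)

# Load a 3D CT volume the same way you would load a file from Images/
volume = np.load("example_volume.npy")
print(volume.shape, volume.dtype)
```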

## πŸ“ train/ and πŸ“ test/
These folders contain the question-answer (QA) pairs categorized by task.  
Each file corresponds to a specific QA task such as anomaly detection, measurement, or temporal reasoning.

- `train/`: QA pairs for model training  
- `test/`: QA pairs for model evaluation  

### Fields
- `VolumeName`: File name of the associated CT volume (matches the file in `Images/`)  
- `Question`: The natural language question  
- `Answer`: The ground truth answer  
- `QuestionType`: Either `open` or `closed`  
- `AnswerChoice`: Correct option (A/B/C/D) for closed questions  
- `Choice A`–`Choice D`: Candidate options for closed questions  
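An illustrative QA record using the fields above (all values are made up, not drawn from the dataset), showing how the `AnswerChoice` letter maps to a choice field for closed questions:

```python
# Hypothetical closed-question record following the field schema above.
record = {
    "VolumeName": "example_volume.npy",  # matches a file in Images/
    "Question": "Is there a pleural effusion?",
    "Answer": "No",
    "QuestionType": "closed",            # "open" or "closed"
    "AnswerChoice": "B",                 # correct option for closed questions
    "Choice A": "Yes",
    "Choice B": "No",
    "Choice C": "Cannot be determined",
    "Choice D": "Not applicable",
}

# For closed questions, the correct option letter selects a choice field.
correct = record[f"Choice {record['AnswerChoice']}"]
print(correct)
```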

## Code
You can find our code in [M3D-RAD_Code](https://github.com/Tang-xiaoxiao/M3D-RAD).

## M3D-RAD Model
You can find our model in [M3D-RAD_Models](https://huggingface.co/Tang-xiaoxiao/M3D-RAD).

## Data Source
The original CT scans in our dataset are derived from [CT-RATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE), which is released under a CC-BY-NC-SA license. We fully comply with the license terms by using the data for non-commercial academic research, providing proper attribution.

## Model Links

| Model | Paper                                                        |
| ----- | ------------------------------------------------------------ |
| [RadFM](https://github.com/chaoyi-wu/RadFM) | Towards Generalist Foundation Model for Radiology by Leveraging Web-scale 2D&3D Medical Data |
| [M3D](https://github.com/BAAI-DCAI/M3D)   | M3D: Advancing 3D Medical Image Analysis with Multi-Modal Large Language Models |
| OmniV-Med (not open-source) | OmniV-Med: Scaling Medical Vision-Language Model for Universal Visual Understanding |