Tang-xiaoxiao committed on
Commit ca087e7 · verified · 1 Parent(s): 1b9c32f

Update README.md
---
license: apache-2.0
---
# 3D-RAD
The official dataset for the paper "3D-RAD: A Comprehensive 3D Radiology Med-VQA Dataset with Multi-Temporal Analysis and Diverse Diagnostic Tasks".

In our project, we collect 3D-RAD, a large-scale dataset designed to advance 3D Med-VQA using radiology CT scans. It encompasses six diverse VQA tasks: anomaly detection (task 1), image observation (task 2), medical computation (task 3), existence detection (task 4), static temporal diagnosis (task 5), and longitudinal temporal diagnosis (task 6).

![Main Figure](https://github.com/Tang-xiaoxiao/M3D-RAD/blob/main/Figures/main.png?raw=true)

## 📁 Images/
This folder contains preprocessed 3D CT volumes in `.npy` format.
Each file is structured to facilitate direct input into vision-language models.
- Purpose: standardized model input across all tasks.
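Since the volumes are stored as `.npy` files, they can be loaded directly with NumPy. A minimal sketch; the file name `example_volume.npy` and the array shape here are hypothetical, real files use the names given by the `VolumeName` field:

```python
import numpy as np

# Hypothetical illustration: save and reload a dummy volume to show the
# expected access pattern; real files live in Images/ and are referenced
# by the VolumeName field of the QA files.
dummy = np.random.rand(32, 256, 256).astype(np.float32)  # (depth, height, width)
np.save("example_volume.npy", dummy)

volume = np.load("example_volume.npy")
print(volume.shape, volume.dtype)  # (32, 256, 256) float32
```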

## 📁 train/ and 📁 test/
These folders contain the question-answer (QA) pairs, organized by task.
Each file corresponds to a specific QA task, such as anomaly detection, measurement, or temporal reasoning.

- `train/`: QA pairs for model training
- `test/`: QA pairs for model evaluation

### Fields
- `VolumeName`: File name of the associated CT volume (matches the file in `Images/`)
- `Question`: The natural language question
- `Answer`: The ground-truth answer
- `QuestionType`: Either `open` or `closed`
- `AnswerChoice`: Correct option (A/B/C/D) for closed questions
- `Choice A`–`Choice D`: Candidate options for closed questions
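A sketch of how a single QA record might be consumed, assuming it exposes the fields above; the sample record below is invented for illustration, not taken from the dataset:

```python
# Hand-written sample record for illustration only; real QA files in
# train/ and test/ carry these same fields.
record = {
    "VolumeName": "volume_0001.npy",
    "Question": "Is an anomaly present in this scan?",
    "QuestionType": "closed",
    "Answer": "Yes",
    "AnswerChoice": "A",
    "Choice A": "Yes",
    "Choice B": "No",
}

if record["QuestionType"] == "closed":
    # For closed questions, the correct option letter indexes a choice column,
    # whose text should match the ground-truth Answer.
    correct_text = record[f"Choice {record['AnswerChoice']}"]
    print(correct_text == record["Answer"])  # True
```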

## Code
You can find our code in [M3D-RAD_Code](https://github.com/Tang-xiaoxiao/M3D-RAD).

## M3D-RAD Model
You can find our model in [M3D-RAD_Models](https://huggingface.co/Tang-xiaoxiao/M3D-RAD).

## Data Source
The original CT scans in our dataset are derived from [CT-RATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE), which is released under a CC-BY-NC-SA license. We fully comply with the license terms by using the data for non-commercial academic research and providing proper attribution.

## Model Links

| Model | Paper |
| ----- | ------------------------------------------------------------ |
| [RadFM](https://github.com/chaoyi-wu/RadFM) | Towards Generalist Foundation Model for Radiology by Leveraging Web-scale 2D&3D Medical Data |
| [M3D](https://github.com/BAAI-DCAI/M3D) | M3D: Advancing 3D Medical Image Analysis with Multi-Modal Large Language Models |
| OmniV-Med (not open) | OmniV-Med: Scaling Medical Vision-Language Model for Universal Visual Understanding |