---
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
---

# M3D-RAD Model

The official model for the paper "[3D-RAD: A Comprehensive 3D Radiology Med-VQA Dataset with Multi-Temporal Analysis and Diverse Diagnostic Tasks](https://huggingface.co/papers/2506.11147)".

In this project, we collect 3D-RAD, a large-scale dataset designed to advance 3D Med-VQA using radiology CT scans. It encompasses six diverse VQA tasks: anomaly detection (task 1), image observation (task 2), medical computation (task 3), existence detection (task 4), static temporal diagnosis (task 5), and longitudinal temporal diagnosis (task 6).

![Main Figure](https://github.com/Tang-xiaoxiao/M3D-RAD/blob/main/Figures/main.png?raw=true)

## Code

You can find our code in [M3D-RAD_Code](https://github.com/Tang-xiaoxiao/M3D-RAD).

## 3D-RAD Dataset

You can find our dataset in [3D-RAD_Dataset](https://huggingface.co/datasets/Tang-xiaoxiao/3D-RAD).

## Model Links

| Model | Paper |
| ----- | ----- |
| [RadFM](https://github.com/chaoyi-wu/RadFM) | Towards Generalist Foundation Model for Radiology by Leveraging Web-scale 2D&3D Medical Data |
| [M3D](https://github.com/BAAI-DCAI/M3D) | M3D: Advancing 3D Medical Image Analysis with Multi-Modal Large Language Models |
| OmniV (not open) | OmniV-Med: Scaling Medical Vision-Language Model for Universal Visual Understanding |