---
datasets:
- medical
language: en
library_name: torch
license: cc-by-sa-4.0
pipeline_tag: image-segmentation
tags:
- medical
- segmentation
- sam
- medical-imaging
- ct
- mri
- ultrasound
---
# MedSAM2: Segment Anything in 3D Medical Images and Videos
<div align="center">
<table align="center">
<tr>
<td><a href="https://arxiv.org/abs/2504.03600" target="_blank"><img src="https://img.shields.io/badge/arXiv-Paper-FF6B6B?style=for-the-badge&logo=arxiv&logoColor=white" alt="Paper"></a></td>
<td><a href="https://github.com/bowang-lab/MedSAM2" target="_blank"><img src="https://img.shields.io/badge/GitHub-Code-181717?style=for-the-badge&logo=github&logoColor=white" alt="Code"></a></td>
<td><a href="https://huggingface.co/wanglab/MedSAM2" target="_blank"><img src="https://img.shields.io/badge/HuggingFace-Model-FFBF00?style=for-the-badge&logo=huggingface&logoColor=white" alt="HuggingFace Model"></a></td>
</tr>
<tr>
<td><a href="https://medsam-datasetlist.github.io/" target="_blank"><img src="https://img.shields.io/badge/Dataset-List-00B89E?style=for-the-badge" alt="Dataset List"></a></td>
<td><a href="https://huggingface.co/datasets/wanglab/CT_DeepLesion-MedSAM2" target="_blank"><img src="https://img.shields.io/badge/Dataset-CT__DeepLesion-28A745?style=for-the-badge" alt="CT_DeepLesion-MedSAM2"></a></td>
<td><a href="https://huggingface.co/datasets/wanglab/LLD-MMRI-MedSAM2" target="_blank"><img src="https://img.shields.io/badge/Dataset-LLD--MMRI-FF6B6B?style=for-the-badge" alt="LLD-MMRI-MedSAM2"></a></td>
</tr>
<tr>
<td><a href="https://github.com/bowang-lab/MedSAMSlicer/tree/MedSAM2" target="_blank"><img src="https://img.shields.io/badge/3D_Slicer-Plugin-000000?style=for-the-badge" alt="3D Slicer"></a></td>
<td><a href="https://github.com/bowang-lab/MedSAM2/blob/main/app.py" target="_blank"><img src="https://img.shields.io/badge/Gradio-Demo-F9D371?style=for-the-badge&logo=gradio&logoColor=white" alt="Gradio App"></a></td>
<td><a href="https://colab.research.google.com/drive/1MKna9Sg9c78LNcrVyG58cQQmaePZq2k2?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Colab-Notebook-F9AB00?style=for-the-badge&logo=googlecolab&logoColor=white" alt="Colab"></a></td>
</tr>
</table>
</div>
[Project Page](https://medsam2.github.io/)
## Authors
<p align="center">
<a href="https://scholar.google.com.hk/citations?hl=en&user=bW1UV4IAAAAJ&view_op=list_works&sortby=pubdate">Jun Ma</a><sup>* 1,2</sup>,
<a href="https://scholar.google.com/citations?user=8IE0CfwAAAAJ&hl=en">Zongxin Yang</a><sup>* 3</sup>,
Sumin Kim<sup>2,4,5</sup>,
Bihui Chen<sup>2,4,5</sup>,
<a href="https://scholar.google.com.hk/citations?user=U-LgNOwAAAAJ&hl=en&oi=sra">Mohammed Baharoon</a><sup>2,3,5</sup>,<br>
<a href="https://scholar.google.com.hk/citations?user=4qvKTooAAAAJ&hl=en&oi=sra">Adibvafa Fallahpour</a><sup>2,4,5</sup>,
<a href="https://scholar.google.com.hk/citations?user=UlTJ-pAAAAAJ&hl=en&oi=sra">Reza Asakereh</a><sup>4,7</sup>,
Hongwei Lyu<sup>4</sup>,
<a href="https://wanglab.ai/index.html">Bo Wang</a><sup>† 1,2,4,5,6</sup>
</p>
<p align="center">
<sup>*</sup> Equal contribution <sup>†</sup> Corresponding author
</p>
<p align="center">
<sup>1</sup>AI Collaborative Centre, University Health Network, Toronto, Canada<br>
<sup>2</sup>Vector Institute for Artificial Intelligence, Toronto, Canada<br>
<sup>3</sup>Department of Biomedical Informatics, Harvard Medical School, Harvard University, Boston, USA<br>
<sup>4</sup>Peter Munk Cardiac Centre, University Health Network, Toronto, Canada<br>
<sup>5</sup>Department of Computer Science, University of Toronto, Toronto, Canada<br>
<sup>6</sup>Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Canada<br>
<sup>7</sup>Roche Canada and Genentech
</p>
## Highlights
- A promptable foundation model for 3D medical image and video segmentation
- Trained on 455,000+ 3D image-mask pairs and 76,000+ annotated video frames
- Versatile segmentation capability across diverse organs and pathologies
- Extensive user studies on large-scale lesion and video datasets demonstrate that MedSAM2 substantially facilitates annotation workflows
## Model Overview
MedSAM2 is a promptable segmentation model tailored for medical imaging applications. Built on the [Segment Anything Model (SAM) 2.1](https://github.com/facebookresearch/sam2), MedSAM2 has been specifically adapted and fine-tuned for 3D medical images and videos.
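Like other SAM-family models, MedSAM2 operates on intensity-normalized image slices. As a minimal illustration of a typical preprocessing step (CT intensity windowing is standard practice before prompting such models; the window values and function below are illustrative assumptions, not part of this model card), a soft-tissue window can be applied to a CT volume before segmentation:

```python
import numpy as np

def window_ct(volume: np.ndarray, level: float = 40.0, width: float = 400.0) -> np.ndarray:
    """Clip a CT volume (in Hounsfield units) to a display window and rescale to [0, 255].

    A common soft-tissue window uses level 40 HU and width 400 HU; adjust per
    anatomy (e.g., lung or bone windows) as appropriate for your data.
    """
    lower, upper = level - width / 2.0, level + width / 2.0
    clipped = np.clip(volume.astype(np.float32), lower, upper)
    return ((clipped - lower) / (upper - lower) * 255.0).astype(np.uint8)

# Example: a synthetic CT volume of shape (slices, height, width) in HU
volume = np.random.randint(-1000, 1000, size=(4, 64, 64)).astype(np.float32)
normalized = window_ct(volume)
print(normalized.shape, normalized.dtype)  # (4, 64, 64) uint8
```

The normalized slices can then be fed to the predictor from the [MedSAM2 repository](https://github.com/bowang-lab/MedSAM2) together with point or box prompts; see the repository's inference scripts and the Colab notebook linked above for the exact loading and prompting API.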
<!-- rest of the model card --> |