---
license: mit
pipeline_tag: object-detection
---
# *EmbodiedSAM*: Online Segment Any 3D Thing in Real Time
[📚 Paper](https://arxiv.org/abs/2408.11811)
*EmbodiedSAM* is an efficient framework that leverages vision foundation models for **online**, **real-time**, **fine-grained**, **generalized** and **open-vocabulary** 3D instance segmentation.
## Abstract
Embodied tasks require the agent to fully understand the 3D scene as it explores, so an online, real-time, fine-grained and highly generalized 3D perception model is desperately needed. Since high-quality 3D data is limited, directly training such a model in 3D is almost infeasible. Meanwhile, vision foundation models (VFM) have revolutionized the field of 2D computer vision with superior performance, which makes using VFMs to assist embodied 3D perception a promising direction. However, most existing VFM-assisted 3D perception methods are either offline or too slow to be applied in practical embodied tasks. In this paper, we aim to leverage the Segment Anything Model (SAM) for real-time 3D instance segmentation in an online setting. This is a challenging problem since future frames are not available in the input streaming RGB-D video, and an instance may be observed in several frames, so object matching between frames is required. To address these challenges, we first propose a geometric-aware query lifting module to represent the 2D masks generated by SAM as 3D-aware queries, which are then iteratively refined by a dual-level query decoder. In this way, the 2D masks are transferred to fine-grained shapes on 3D point clouds. Benefiting from the query representation of 3D masks, we can compute the similarity matrix between 3D masks from different views with efficient matrix operations, which enables real-time inference. Experiments on ScanNet, ScanNet200, SceneNN and 3RScan show that our method achieves leading performance even compared with offline methods. Our method also demonstrates strong generalization ability in several zero-shot dataset transfer experiments and shows great potential in open-vocabulary and data-efficient settings. Code and demo are available at https://xuxw98.github.io/ESAM/, with only one RTX 3090 GPU required for training and evaluation.
The official code is publicly released in this [repo](https://github.com/xuxw98/ESAM).
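The real-time matching step described in the abstract, comparing 3D masks across views through their query embeddings with a single matrix operation, can be sketched as follows. This is a minimal illustration under stated assumptions, not ESAM's actual code: the cosine-similarity metric, the Hungarian assignment, the `match_queries` name and the 0.5 threshold are all choices made here for the sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_queries(prev_q, cur_q, sim_threshold=0.5):
    """Match instance queries across frames via one similarity matrix.

    prev_q: (M, D) query embeddings of instances seen so far.
    cur_q:  (N, D) query embeddings from the current frame.
    Returns (prev_idx, cur_idx) pairs whose cosine similarity exceeds
    sim_threshold; current queries left unmatched start new instances.
    """
    # L2-normalize so a single matrix multiply yields cosine similarities.
    p = prev_q / (np.linalg.norm(prev_q, axis=1, keepdims=True) + 1e-12)
    c = cur_q / (np.linalg.norm(cur_q, axis=1, keepdims=True) + 1e-12)
    sim = p @ c.T  # (M, N) similarity matrix in one matrix operation

    # One-to-one assignment that maximizes total similarity.
    rows, cols = linear_sum_assignment(sim, maximize=True)
    return [(int(r), int(k)) for r, k in zip(rows, cols)
            if sim[r, k] > sim_threshold]
```

Because matching reduces to a matrix multiply plus a small assignment problem, its cost is negligible next to per-frame mask generation, which is what makes online operation feasible.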
## Citation
```bibtex
@article{xu2024esam,
  title={EmbodiedSAM: Online Segment Any 3D Thing in Real Time},
  author={Xiuwei Xu and Huangxing Chen and Linqing Zhao and Ziwei Wang and Jie Zhou and Jiwen Lu},
  journal={arXiv preprint arXiv:2408.11811},
  year={2024}
}
```
## Main Results
We provide the checkpoints for quick reproduction of the results reported in the paper.
**Class-agnostic 3D instance segmentation results on ScanNet200 dataset:**
| Method | Type | VFM | AP | AP@50 | AP@25 | Speed (ms) | Downloads |
| :-----------------------------------------------------: | :-----: | :---------------------------------------------------------: | :------: | :------: | :------: | :-----------: | :----------------------------------------------------------: |
| [SAMPro3D](https://github.com/GAP-LAB-CUHK-SZ/SAMPro3D) | Offline | [SAM](https://github.com/facebookresearch/segment-anything) | 18.0 | 32.8 | 56.1 | -- | -- |
| [SAI3D](https://github.com/yd-yin/SAI3D) | Offline | [SemanticSAM](https://github.com/UX-Decoder/Semantic-SAM) | 30.8 | 50.5 | 70.6 | -- | -- |
| [SAM3D](https://github.com/Pointcept/SegmentAnything3D) | Online | SAM | 20.6 | 35.7 | 55.5 | 1369+1518 | -- |
| ESAM | Online | SAM | 42.2 | 63.7 | 79.6 | 1369+**80** | [model](https://huggingface.co/XXXCARREY/EmbodiedSAM/blob/main/ESAM_CA_online_epoch_128.pth) |
| ESAM-E | Online | [FastSAM](https://github.com/CASIA-IVA-Lab/FastSAM) | **43.4** | **65.4** | **80.9** | **20**+**80** | [model](https://huggingface.co/XXXCARREY/EmbodiedSAM/blob/main/ESAM-E_CA_online_epoch_128.pth) |
**Dataset transfer results from ScanNet200 to SceneNN and 3RScan:**
<table class="tg"><thead>
<tr>
<th class="tg-b2st" rowspan="2">Method</th>
<th class="tg-b2st" rowspan="2">Type </th>
<th class="tg-b2st" colspan="3">ScanNet200 &rarr; SceneNN</th>
<th class="tg-b2st" colspan="3">ScanNet200 &rarr; 3RScan</th>
</tr>
<tr>
<th class="tg-wa1i">AP</th>
<th class="tg-wa1i">AP@50</th>
<th class="tg-wa1i">AP@25</th>
<th class="tg-wa1i">AP</th>
<th class="tg-wa1i">AP@50</th>
<th class="tg-wa1i">AP@25</th>
</tr></thead>
<tbody>
<tr>
<td class="tg-nrix">SAMPro3D</td>
<td class="tg-nrix">Offline</td>
<td class="tg-nrix">12.6</td>
<td class="tg-nrix">25.8</td>
<td class="tg-nrix">53.2</td>
<td class="tg-nrix">3.9</td>
<td class="tg-nrix">8.0</td>
<td class="tg-nrix">21.0</td>
</tr>
<tr>
<td class="tg-nrix">SAI3D</td>
<td class="tg-nrix">Offline</td>
<td class="tg-nrix">18.6</td>
<td class="tg-nrix">34.7</td>
<td class="tg-nrix">65.7</td>
<td class="tg-nrix">5.4</td>
<td class="tg-nrix">11.8</td>
<td class="tg-nrix">27.4</td>
</tr>
<tr>
<td class="tg-nrix">SAM3D</td>
<td class="tg-nrix">Online</td>
<td class="tg-nrix">15.1</td>
<td class="tg-nrix">30.0</td>
<td class="tg-nrix">51.8</td>
<td class="tg-nrix">6.2</td>
<td class="tg-nrix">13.0</td>
<td class="tg-nrix">33.9</td>
</tr>
<tr>
<td class="tg-nrix">ESAM</td>
<td class="tg-nrix">Online</td>
<td class="tg-nrix"><b>28.8</b></td>
<td class="tg-nrix"><b>52.2</b></td>
<td class="tg-nrix">69.3</td>
<td class="tg-nrix"><b>14.1</b></td>
<td class="tg-nrix"><b>31.2</b></td>
<td class="tg-nrix"><b>59.6</b></td>
</tr>
<tr>
<td class="tg-nrix">ESAM-E</td>
<td class="tg-nrix">Online</td>
<td class="tg-nrix">28.6</td>
<td class="tg-nrix">50.4</td>
<td class="tg-nrix"><b>71.0</b></td>
<td class="tg-nrix">13.9</td>
<td class="tg-nrix">29.4</td>
<td class="tg-nrix">58.8</td>
</tr>
</tbody></table>
**3D instance segmentation results on ScanNet dataset:**
<table class="tg"><thead>
<tr>
<th class="tg-gabo" rowspan="2">Method</th>
<th class="tg-gabo" rowspan="2">Type</th>
<th class="tg-gabo" colspan="3">ScanNet</th>
<th class="tg-gabo" colspan="3">SceneNN</th>
<th class="tg-gabo" rowspan="2">FPS</th>
<th class="tg-gabo" rowspan="2">Download</th>
</tr>
<tr>
<th class="tg-uzvj">AP</th>
<th class="tg-uzvj">AP@50</th>
<th class="tg-uzvj">AP@25</th>
<th class="tg-uzvj">AP</th>
<th class="tg-uzvj">AP@50</th>
<th class="tg-uzvj">AP@25</th>
</tr></thead>
<tbody>
<tr>
<td class="tg-9wq8"><a href=https://github.com/SamsungLabs/td3d>TD3D</a></td>
<td class="tg-9wq8">offline</td>
<td class="tg-9wq8">46.2</td>
<td class="tg-9wq8">71.1</td>
<td class="tg-9wq8">81.3</td>
<td class="tg-9wq8">--</td>
<td class="tg-9wq8">--</td>
<td class="tg-9wq8">--</td>
<td class="tg-9wq8">--</td>
<td class="tg-9wq8">--</td>
</tr>
<tr>
<td class="tg-9wq8"><a href=https://github.com/oneformer3d/oneformer3d>Oneformer3D</a></td>
<td class="tg-9wq8">offline</td>
<td class="tg-9wq8">59.3</td>
<td class="tg-9wq8">78.8</td>
<td class="tg-9wq8">86.7</td>
<td class="tg-9wq8">--</td>
<td class="tg-9wq8">--</td>
<td class="tg-9wq8">--</td>
<td class="tg-9wq8">--</td>
<td class="tg-9wq8">--</td>
</tr>
<tr>
<td class="tg-9wq8"><a href=https://github.com/THU-luvision/INS-Conv>INS-Conv</a></td>
<td class="tg-9wq8">online</td>
<td class="tg-9wq8">--</td>
<td class="tg-9wq8">57.4</td>
<td class="tg-9wq8">--</td>
<td class="tg-9wq8">--</td>
<td class="tg-9wq8">--</td>
<td class="tg-9wq8">--</td>
<td class="tg-9wq8">--</td>
<td class="tg-9wq8">--</td>
</tr>
<tr>
<td class="tg-9wq8"><a href=https://github.com/xuxw98/Online3D>TD3D-MA</a></td>
<td class="tg-9wq8">online</td>
<td class="tg-9wq8">39.0</td>
<td class="tg-9wq8">60.5</td>
<td class="tg-9wq8">71.3</td>
<td class="tg-9wq8">26.0</td>
<td class="tg-9wq8">42.8</td>
<td class="tg-9wq8">59.2</td>
<td class="tg-9wq8">3.5</td>
<td class="tg-9wq8">--</td>
</tr>
<tr>
<td class="tg-9wq8">ESAM-E</td>
<td class="tg-9wq8">online</td>
<td class="tg-9wq8">41.6</td>
<td class="tg-9wq8">60.1</td>
<td class="tg-9wq8">75.6</td>
<td class="tg-9wq8">27.5</td>
<td class="tg-9wq8">48.7</td>
<td class="tg-uzvj"><b>64.6</b></td>
<td class="tg-uzvj"><b>10</b></td>
<td class="tg-9wq8"><a href="https://huggingface.co/XXXCARREY/EmbodiedSAM/blob/main/ESAM-E_online_epoch_128.pth">model</a></td>
</tr>
<tr>
<td class="tg-nrix">ESAM-E+FF</td>
<td class="tg-nrix">online</td>
<td class="tg-wa1i"><b>42.6</b></td>
<td class="tg-wa1i"><b>61.9</b></td>
<td class="tg-wa1i"><b>77.1</b></td>
<td class="tg-wa1i"><b>33.3</b></td>
<td class="tg-wa1i"><b>53.6</b></td>
<td class="tg-nrix">62.5</td>
<td class="tg-nrix">9.8</td>
<td class="tg-nrix"><a href="https://huggingface.co/XXXCARREY/EmbodiedSAM/blob/main/ESAM-E_FF_online_epoch_128.pth">model</a></td>
</tr>
</tbody></table>
**Open-Vocabulary 3D instance segmentation results on ScanNet200 dataset:**
| Method | AP | AP@50 | AP@25 |
| :----: | :------: | :------: | :------: |
| SAI3D | 9.6 | 14.7 | 19.0 |
| ESAM | **13.7** | **19.2** | **23.9** |