Instructions to use VAST-AI/GeoSAM2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

- Libraries
  - sam2

How to use VAST-AI/GeoSAM2 with sam2:

```python
# Use SAM2 with images
import torch
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor.from_pretrained("VAST-AI/GeoSAM2")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(<your_image>)
    masks, _, _ = predictor.predict(<input_prompts>)
```

```python
# Use SAM2 with videos
import torch
from sam2.sam2_video_predictor import SAM2VideoPredictor

predictor = SAM2VideoPredictor.from_pretrained("VAST-AI/GeoSAM2")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state(<your_video>)

    # add new prompts and instantly get the output on the same frame
    frame_idx, object_ids, masks = predictor.add_new_points(state, <your_prompts>)

    # propagate the prompts to get masklets throughout the video
    for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
        ...
```

- Notebooks
  - Google Colab
  - Kaggle
Release cleaned fp32 checkpoint, add bf16 variant, complete model card

- README.md +128 -4
- geosam2-bf16.pt +3 -0
- geosam2.pt +2 -2

README.md CHANGED

@@ -1,9 +1,133 @@
---
license: apache-2.0
library_name: pytorch
pipeline_tag: mask-generation
tags:
- 3d
- mesh
- 3d-part-segmentation
- sam2
- segmentation
- point-cloud
- geosam2
base_model: facebook/sam2.1-hiera-base-plus
language:
- en
---

# GeoSAM2

> Unleashing the Power of SAM2 for 3D Part Segmentation · CVPR 2026

<div align="center">

[Project Page](https://detailgen3d.github.io/GeoSAM2/)
[arXiv](https://arxiv.org/abs/2508.14036)
[Code](https://github.com/VAST-AI-Research/GeoSAM2)
[License: Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

</div>

GeoSAM2 lifts [SAM2](https://github.com/facebookresearch/sam2) from images to 3D meshes. Given a multi-view rendering of a mesh and an interactive prompt (a single 2D click or a 2D mask) on one view, it propagates a consistent segmentation across all views and back-projects the result to per-face 3D part labels.
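
To make the back-projection step concrete, here is a small illustrative sketch of fusing per-view 2D labels into per-face 3D labels by majority vote over face-index renders. The function and its inputs are assumptions for illustration only; the repository's actual aggregation also accounts for visibility and post-processing.

```python
import numpy as np

def fuse_view_labels(face_id_maps, label_maps, num_faces):
    """Majority-vote per-face labels from per-view 2D label maps.

    face_id_maps: list of (H, W) int arrays, face index per pixel, -1 = background
    label_maps:   list of (H, W) int arrays, part label per pixel
    """
    num_labels = max(int(lab.max()) for lab in label_maps) + 1
    votes = np.zeros((num_faces, num_labels), dtype=np.int64)
    for fid, lab in zip(face_id_maps, label_maps):
        valid = fid >= 0                           # ignore background pixels
        np.add.at(votes, (fid[valid], lab[valid]), 1)
    return votes.argmax(axis=1)                    # (num_faces,) label per face
```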

This repository hosts the **pretrained inference checkpoint** (`geosam2.pt`). Code, configs, and a small multi-view demo dataset live in the companion GitHub repo: <https://github.com/VAST-AI-Research/GeoSAM2>.

## Model summary

| | |
|---|---|
| Task | Interactive 3D part segmentation on meshes via multi-view 2D propagation |
| Base model | [`facebook/sam2.1-hiera-base-plus`](https://huggingface.co/facebook/sam2.1-hiera-base-plus) |
| Architecture | SAM2 (Hiera-B+ image encoder + memory attention + mask decoder), plus a dedicated **position-map encoder** for 3D geometry, **feature fusion**, and **LoRA adapters** on the image and position-map encoders |
| Parameters | ~154 M (fp32: ~588 MB · bf16: ~294 MB) |
| Input modalities | 12 rendered views per mesh: color (`.webp`), depth (`.exr`), normal (`.webp`), camera metadata (`meta.json`) |
| Prompts | 2D point clicks or a 2D mask on any view |
| Output | Per-view 2D label maps and per-face 3D labels for the input mesh |
| Render config | 12 azimuthally spaced views at 1024×1024 from a fixed elevation |

## Quickstart

```bash
# 1. Clone the code
git clone https://github.com/VAST-AI-Research/GeoSAM2.git
cd GeoSAM2
pip install -r requirements.txt
pip install -e .  # builds the optional CUDA op; set GEOSAM2_BUILD_CUDA=0 to skip

# 2. Download the checkpoint into ./ckpt
hf download VAST-AI/GeoSAM2 geosam2.pt --local-dir ckpt

# 3. Run the bundled demo (single-view point prompt -> 3D segmentation)
bash scripts/run_example.sh
```
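
To fetch the checkpoint from Python instead of the `hf` CLI, the standard `huggingface_hub` API works as well (generic Hub usage, not a GeoSAM2-specific helper):

```python
from huggingface_hub import hf_hub_download

# Download geosam2.pt into ./ckpt (cached on subsequent calls).
ckpt_path = hf_hub_download(
    repo_id="VAST-AI/GeoSAM2",
    filename="geosam2.pt",
    local_dir="ckpt",
)
print(ckpt_path)
```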

Direct inference from a 2D mask:

```bash
python inference.py \
  --data-root example/sample_00 \
  --mask-path outputs/sample_00/2d_seg/mask_view0000.npy \
  --mask-view 0 \
  --postprocess-pa 0.02 \
  --output-dir outputs/sample_00/3d_seg
```
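
The exact `.npy` layout comes from the repo's 2D-segmentation stage; as a rough sketch, assuming the prompt is a per-pixel boolean mask at the 1024×1024 render resolution listed above, you could author one like this (hypothetical example, not the repo's own tooling):

```python
import numpy as np

# Hypothetical mask prompt: a circular region on view 0 at the
# 1024x1024 render resolution from the model summary.
H = W = 1024
yy, xx = np.mgrid[:H, :W]
mask = (yy - 512) ** 2 + (xx - 512) ** 2 < 200 ** 2   # bool (H, W)
np.save("outputs/sample_00/2d_seg/mask_view0000.npy", mask)
```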

See the [README](https://github.com/VAST-AI-Research/GeoSAM2#readme) for the full usage guide, including rendering your own meshes with Blender.

## Files

| File | Size | Description |
|---|---|---|
| `geosam2.pt` | ~588 MB | Pretrained weights in `float32` (`{"model": state_dict}`). Default choice. |
| `geosam2-bf16.pt` | ~294 MB | Same weights cast to `bfloat16` for faster downloads / lower memory. Loaded by the standard code path; `load_state_dict` upcasts to the model dtype, so no extra steps are required. Expect a small reconstruction error of ≤ 0.015 per weight versus the fp32 file. |
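
For reference, a bf16 copy like this can be produced from the fp32 file with plain PyTorch; this is a sketch of the general recipe, not necessarily the exact script used for this release:

```python
import torch

ckpt = torch.load("ckpt/geosam2.pt", map_location="cpu")
# Cast floating-point tensors to bfloat16; leave integer tensors untouched.
ckpt["model"] = {
    k: v.to(torch.bfloat16) if v.is_floating_point() else v
    for k, v in ckpt["model"].items()
}
torch.save(ckpt, "ckpt/geosam2-bf16.pt")
```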

Both checkpoints are loaded by `sam2.build_sam.build_sam2_video_predictor_geosam2` with the Hydra config `sam2/configs/geosam2.yaml`. Pass the chosen file via `--sam2-checkpoint` (or use the default `ckpt/geosam2.pt` path expected by the scripts).
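
In Python this corresponds to something like the sketch below. It assumes the GeoSAM2 builder mirrors the `build_sam2_video_predictor(config_file, ckpt_path, ...)` signature of upstream SAM2; check the repo's `inference.py` for the authoritative call:

```python
import torch
from sam2.build_sam import build_sam2_video_predictor_geosam2

# Either checkpoint works; the bf16 file is upcast on load.
predictor = build_sam2_video_predictor_geosam2(
    "sam2/configs/geosam2.yaml",
    "ckpt/geosam2.pt",  # or "ckpt/geosam2-bf16.pt"
    device="cuda" if torch.cuda.is_available() else "cpu",
)
```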

## Intended use

- **Intended**: interactive 3D part segmentation of single-object meshes for research and content-creation tooling.
- **Out of scope**: scene-level segmentation, dynamic scenes, semantic category prediction (the model produces instance-level part masks, not semantic class labels), and safety-critical applications.

## Limitations

- Expects the 12-view rendering convention produced by `geosam2_render.py`; arbitrary view counts or camera trajectories may degrade quality.
- The mesh must fit within the normalised cube used at render time (`geosam2_render.py` handles this for the bundled samples; see the sketch after this list).
- Performance on thin/wire-like geometry and on highly transparent surfaces is still an open problem.
- The post-processing `--postprocess-pa` value sometimes needs hand-tuning per mesh (`0.01`, `0.02`, `0.035` are useful starting points).
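
The normalisation itself is simple. A minimal `trimesh` sketch, assuming a unit cube centred at the origin (the exact cube and conventions are defined by `geosam2_render.py`):

```python
import trimesh

mesh = trimesh.load("my_mesh.glb", force="mesh")
# Centre at the origin, then scale the longest extent to fit [-1, 1]^3.
mesh.apply_translation(-mesh.bounding_box.centroid)
mesh.apply_scale(2.0 / mesh.extents.max())
mesh.export("my_mesh_normalized.glb")
```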

## License

Released under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0). The checkpoint is a derivative of Meta's [SAM2](https://github.com/facebookresearch/sam2) (Apache 2.0); see the [`NOTICE`](https://github.com/VAST-AI-Research/GeoSAM2/blob/main/NOTICE) file in the code repo for attribution.

## Citation

```bibtex
@article{deng2025geosam2,
  title   = {GeoSAM2: Unleashing the Power of SAM2 for 3D Part Segmentation},
  author  = {Deng, Ken and Yang, Yunhan and Sun, Jingxiang and
             Liu, Xihui and Liu, Yebin and Liang, Ding and Cao, Yan-Pei},
  journal = {arXiv preprint arXiv:2508.14036},
  year    = {2025}
}
```

geosam2-bf16.pt ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d169ad2a4ee273646bd44685fdfb72d565dd775a89ec6c13a06bb93a2d4ea84
+size 308000410
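
To verify a finished download against the pointer above (plain Python, nothing GeoSAM2-specific):

```python
import hashlib

h = hashlib.sha256()
with open("ckpt/geosam2-bf16.pt", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
        h.update(chunk)
assert h.hexdigest() == "0d169ad2a4ee273646bd44685fdfb72d565dd775a89ec6c13a06bb93a2d4ea84"
```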

geosam2.pt CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:2e391c0d9455fe92b69d61cdc94754d9b8081b9b541d09e1b6a3a55ebf6c6de0
+size 615627435