---
license: apache-2.0
tags:
- image-segmentation
- segment-anything
- segment-anything-2
- onnx
- onnxruntime
library_name: onnxruntime
---
# Segment Anything 2.1 (SAM 2.1) — ONNX Models
ONNX-exported versions of Meta's Segment Anything Model 2.1 (SAM 2.1) — an improved version of SAM 2 with better accuracy and robustness — ready for CPU/GPU inference with ONNX Runtime.
These models are used by AnyLabeling for AI-assisted image annotation, and exported by samexporter.
## Available Models
| File | Variant | Notes |
|---|---|---|
| `sam2.1_hiera_tiny_20260221.zip` | SAM 2.1 Hiera-Tiny | Smallest, fastest |
| `sam2.1_hiera_small_20260221.zip` | SAM 2.1 Hiera-Small | Good balance |
| `sam2.1_hiera_base_plus_20260221.zip` | SAM 2.1 Hiera-Base+ | Higher accuracy |
| `sam2.1_hiera_large_20260221.zip` | SAM 2.1 Hiera-Large | Most accurate |
Each zip contains two ONNX files: an encoder (runs once per image) and a decoder (runs interactively for each prompt).
## What's Improved vs SAM 2?
SAM 2.1 offers improved segmentation accuracy and better handling of edge cases compared to SAM 2. The ONNX conversion process is identical, and both use the same inference API.
## Prompt Types

- **Point** (`+point` / `-point`): click to include or exclude regions
- **Rectangle**: draw a bounding box around the target object
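Point and rectangle prompts are passed to samexporter's inference script as a JSON file. The snippet below writes one; the schema (a list of objects with `type`, `data`, and `label` keys) follows samexporter's examples, but verify against that repository if it has changed.

```python
import json

# Build a prompt list for samexporter's inference script.
# Schema assumed from samexporter's examples -- check the repo to confirm.
prompt = [
    {"type": "point", "data": [575, 750], "label": 1},    # +point: include this region
    {"type": "point", "data": [420, 300], "label": 0},    # -point: exclude this region
    {"type": "rectangle", "data": [100, 100, 900, 700]},  # box as [x1, y1, x2, y2]
]

with open("prompt.json", "w") as f:
    json.dump(prompt, f, indent=2)
```

Pixel coordinates are in the original image's coordinate system.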
## Use with AnyLabeling (Recommended)
AnyLabeling is a desktop annotation tool with a built-in model manager that downloads, caches, and runs these models automatically — no coding required.
- Install: `pip install anylabeling`
- Launch: `anylabeling`
- Click the **Brain** button → select a Segment Anything 2.1 model from the dropdown
- Use point or rectangle prompts to segment objects
## Use Programmatically with ONNX Runtime
Download and extract a model:

```python
import urllib.request, zipfile

url = "https://huggingface.co/vietanhdev/segment-anything-2.1-onnx-models/resolve/main/sam2.1_hiera_tiny_20260221.zip"
urllib.request.urlretrieve(url, "sam2.1_hiera_tiny.zip")
with zipfile.ZipFile("sam2.1_hiera_tiny.zip") as z:
    z.extractall("sam2.1_hiera_tiny")
```
Then use samexporter's inference module:
```bash
pip install samexporter
python -m samexporter.inference \
    --encoder_model sam2.1_hiera_tiny/sam2.1_hiera_tiny.encoder.onnx \
    --decoder_model sam2.1_hiera_tiny/sam2.1_hiera_tiny.decoder.onnx \
    --image photo.jpg \
    --prompt prompt.json \
    --output result.png \
    --sam_variant sam2
```
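If you would rather call ONNX Runtime directly, the sketch below shows the encoder-once / decoder-per-prompt pattern. The tensor names, expected shapes, and preprocessing are assumptions for illustration: inspect `session.get_inputs()` / `get_outputs()` on the actual files, or simply use samexporter's inference module, which handles all of this.

```python
import os
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Toy preprocessing: scale to [0, 1] and convert HWC uint8 -> 1xCxHxW float32.
    (Real SAM 2.1 preprocessing also resizes and normalizes; see samexporter.)"""
    x = image.astype(np.float32) / 255.0
    return x.transpose(2, 0, 1)[None, ...]

encoder_path = "sam2.1_hiera_tiny/sam2.1_hiera_tiny.encoder.onnx"
decoder_path = "sam2.1_hiera_tiny/sam2.1_hiera_tiny.decoder.onnx"

if os.path.exists(encoder_path) and os.path.exists(decoder_path):
    import onnxruntime as ort

    encoder = ort.InferenceSession(encoder_path, providers=["CPUExecutionProvider"])
    decoder = ort.InferenceSession(decoder_path, providers=["CPUExecutionProvider"])

    # The encoder runs once per image...
    image = np.zeros((1024, 1024, 3), dtype=np.uint8)  # stand-in for a real photo
    input_name = encoder.get_inputs()[0].name
    embeddings = encoder.run(None, {input_name: preprocess(image)})

    # ...then the decoder runs per prompt, reusing the cached embeddings.
    # Its exact input dict depends on the export; list the expected names:
    print([i.name for i in decoder.get_inputs()])
```

Caching the encoder output is what makes interactive prompting fast: each new click or box only re-runs the small decoder.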
## Re-export from Source
To re-export or customize the models using samexporter:
```bash
pip install samexporter
pip install git+https://github.com/facebookresearch/segment-anything-2.git

# Download SAM 2.1 checkpoints
bash download_all_models.sh

# Export the Tiny variant
python -m samexporter.export_sam2 \
    --checkpoint original_models/sam2.1_hiera_tiny.pt \
    --output_encoder output_models/sam2.1_hiera_tiny.encoder.onnx \
    --output_decoder output_models/sam2.1_hiera_tiny.decoder.onnx \
    --model_type sam2.1_hiera_tiny

# Or convert all SAM 2 and SAM 2.1 variants at once:
bash convert_all_meta_sam2.sh
```
## Related Repositories
| Repo | Description |
|---|---|
| vietanhdev/samexporter | Export scripts, inference code, conversion tools |
| vietanhdev/anylabeling | Desktop annotation app powered by these models |
| vietanhdev/segment-anything-2-onnx-models | Original SAM 2 ONNX models |
| facebookresearch/segment-anything-2 | Original SAM 2 / SAM 2.1 by Meta |
## License
The ONNX models are derived from Meta's SAM 2.1, released under the Apache 2.0 license. The export code is part of samexporter, released under the MIT license.
