---
license: apache-2.0
tags:
- image-segmentation
- segment-anything
- segment-anything-2
- onnx
- onnxruntime
library_name: onnxruntime
---

# Segment Anything 2.1 (SAM 2.1) – ONNX Models

ONNX-exported versions of Meta's **Segment Anything Model 2.1 (SAM 2.1)**, an improved version of [SAM 2](https://github.com/facebookresearch/segment-anything-2) with better accuracy and robustness, ready for CPU/GPU inference with [ONNX Runtime](https://onnxruntime.ai/).

These models are used by **[AnyLabeling](https://github.com/vietanhdev/anylabeling)** for AI-assisted image annotation and were exported with **[samexporter](https://github.com/vietanhdev/samexporter)**.

## Available Models

| File | Variant | Notes |
|------|---------|-------|
| `sam2.1_hiera_tiny_20260221.zip` | SAM 2.1 Hiera-Tiny | Smallest, fastest |
| `sam2.1_hiera_small_20260221.zip` | SAM 2.1 Hiera-Small | Good balance |
| `sam2.1_hiera_base_plus_20260221.zip` | SAM 2.1 Hiera-Base+ | Higher accuracy |
| `sam2.1_hiera_large_20260221.zip` | SAM 2.1 Hiera-Large | Most accurate |

Each zip contains two ONNX files: an **encoder** (runs once per image) and a **decoder** (runs interactively for each prompt).
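
Because the encoder runs once per image while the decoder runs once per prompt, it is convenient to locate and load the two files separately. A minimal sketch, assuming the `<variant>.encoder.onnx` / `<variant>.decoder.onnx` naming used by these zips (the `find_sam2_models` helper is illustrative, not part of samexporter):

```python
from pathlib import Path

def find_sam2_models(model_dir):
    """Locate the encoder/decoder ONNX pair in an extracted SAM 2.1 zip.

    Assumes the naming convention <variant>.encoder.onnx and
    <variant>.decoder.onnx used by the zips in this repository.
    """
    model_dir = Path(model_dir)
    encoder = next(model_dir.glob("*.encoder.onnx"), None)
    decoder = next(model_dir.glob("*.decoder.onnx"), None)
    if encoder is None or decoder is None:
        raise FileNotFoundError(f"encoder/decoder pair not found in {model_dir}")
    return encoder, decoder
```

Each path can then be passed to `onnxruntime.InferenceSession`; keeping the encoder session alive across prompts means only the lightweight decoder re-runs per click.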

## What's Improved vs SAM 2?

SAM 2.1 offers improved segmentation accuracy and better handling of edge cases compared to SAM 2. The ONNX conversion process is identical, and both use the same inference API.

## Prompt Types

- **Point** (`+point` / `-point`): click to include or exclude regions
- **Rectangle**: draw a bounding box around the target object
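
The `samexporter` inference CLI accepts these prompts as a JSON file. As a sketch of what such a file can look like (the field names here are assumptions for illustration; check the samexporter documentation for the exact schema it expects):

```python
import json

# Illustrative prompt list: one positive point, one negative point,
# and one rectangle. Field names are assumptions, not necessarily
# samexporter's exact schema.
prompts = [
    {"type": "point", "data": [320, 240], "label": 1},   # +point: include this region
    {"type": "point", "data": [100, 400], "label": 0},   # -point: exclude this region
    {"type": "rectangle", "data": [50, 60, 500, 420]},   # box as [x1, y1, x2, y2]
]

with open("prompt.json", "w") as f:
    json.dump(prompts, f, indent=2)
```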

## Use with AnyLabeling (Recommended)

[AnyLabeling](https://github.com/vietanhdev/anylabeling) is a desktop annotation tool with a built-in model manager that downloads, caches, and runs these models automatically, so no coding is required.

1. Install: `pip install anylabeling`
2. Launch: `anylabeling`
3. Click the **Brain** button and select a **Segment Anything 2.1** model from the dropdown
4. Use point or rectangle prompts to segment objects

## Use Programmatically with ONNX Runtime

Download and extract a model zip:

```python
import urllib.request
import zipfile

# Download the Tiny variant and unpack the encoder/decoder pair
url = "https://huggingface.co/vietanhdev/segment-anything-2.1-onnx-models/resolve/main/sam2.1_hiera_tiny_20260221.zip"
urllib.request.urlretrieve(url, "sam2.1_hiera_tiny.zip")
with zipfile.ZipFile("sam2.1_hiera_tiny.zip") as z:
    z.extractall("sam2.1_hiera_tiny")
```

Then use [samexporter](https://github.com/vietanhdev/samexporter)'s inference module:

```bash
pip install samexporter
python -m samexporter.inference \
    --encoder_model sam2.1_hiera_tiny/sam2.1_hiera_tiny.encoder.onnx \
    --decoder_model sam2.1_hiera_tiny/sam2.1_hiera_tiny.decoder.onnx \
    --image photo.jpg \
    --prompt prompt.json \
    --output result.png \
    --sam_variant sam2
```

## Re-export from Source

To re-export or customize the models using [samexporter](https://github.com/vietanhdev/samexporter):

```bash
pip install samexporter
pip install git+https://github.com/facebookresearch/segment-anything-2.git

# Download SAM 2.1 checkpoints
bash download_all_models.sh

# Export the Tiny variant
python -m samexporter.export_sam2 \
    --checkpoint original_models/sam2.1_hiera_tiny.pt \
    --output_encoder output_models/sam2.1_hiera_tiny.encoder.onnx \
    --output_decoder output_models/sam2.1_hiera_tiny.decoder.onnx \
    --model_type sam2.1_hiera_tiny

# Or convert all SAM 2 and SAM 2.1 variants at once:
bash convert_all_meta_sam2.sh
```

## Related Repositories

| Repo | Description |
|------|-------------|
| [vietanhdev/samexporter](https://github.com/vietanhdev/samexporter) | Export scripts, inference code, conversion tools |
| [vietanhdev/anylabeling](https://github.com/vietanhdev/anylabeling) | Desktop annotation app powered by these models |
| [vietanhdev/segment-anything-2-onnx-models](https://huggingface.co/vietanhdev/segment-anything-2-onnx-models) | Original SAM 2 ONNX models |
| [facebookresearch/segment-anything-2](https://github.com/facebookresearch/segment-anything-2) | Official SAM 2 / SAM 2.1 by Meta |

## License

The ONNX models are derived from Meta's SAM 2.1, released under the **Apache 2.0** license.
The export code is part of [samexporter](https://github.com/vietanhdev/samexporter), released under the **MIT** license.