# Segment Anything (SAM + MobileSAM) ONNX Models

ONNX-exported versions of Meta's Segment Anything Model (SAM) and MobileSAM, ready for CPU/GPU inference with ONNX Runtime; no PyTorch is required at runtime.
These models are used by AnyLabeling for AI-assisted image annotation, and exported by samexporter.
## Available Models
| File | Variant | Encoder size | Notes |
|---|---|---|---|
| `sam_vit_b_01ec64.zip` | SAM ViT-B | ~90 MB | Fastest, lowest accuracy |
| `sam_vit_b_01ec64_quant.zip` | SAM ViT-B (Quant) | ~25 MB | Quantized; smaller and faster |
| `sam_vit_l_0b3195.zip` | SAM ViT-L | ~330 MB | Good balance |
| `sam_vit_l_0b3195_quant.zip` | SAM ViT-L (Quant) | ~83 MB | Quantized; smaller and faster |
| `sam_vit_h_4b8939.zip` | SAM ViT-H | ~630 MB | Highest accuracy |
| `sam_vit_h_4b8939_quant.zip` | SAM ViT-H (Quant) | ~158 MB | Quantized; smaller and faster |
| `mobile_sam_20230629.zip` | MobileSAM | ~9 MB | Ultra-lightweight |
Each zip contains two ONNX files: an encoder (runs once per image) and a decoder (runs interactively for each prompt).
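This split is what makes interactive annotation fast: the expensive encoder runs once per image, and only the lightweight decoder re-runs as the user adds prompts. A minimal sketch of that pattern, using hypothetical stand-in functions rather than the real ONNX sessions:

```python
calls = {"encoder": 0, "decoder": 0}

def run_encoder(image):
    """Stand-in for the ONNX encoder session (expensive; once per image)."""
    calls["encoder"] += 1
    return f"embedding({image})"

def run_decoder(embedding, prompt):
    """Stand-in for the ONNX decoder session (cheap; once per prompt)."""
    calls["decoder"] += 1
    return f"mask({embedding}, {prompt})"

# Encode once, then reuse the cached embedding for every prompt.
embedding = run_encoder("photo.jpg")
masks = [run_decoder(embedding, p) for p in [(460, 375), (120, 80)]]
print(calls)  # encoder ran once, decoder once per prompt
```

The same caching idea is why AnyLabeling can respond to each click in real time after a one-off delay when the image is first loaded.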
## Prompt Types
- **Point** (`+point` / `-point`): click to include or exclude regions
- **Rectangle**: draw a bounding box around the target object
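When running samexporter from the command line, prompts are supplied as a JSON file. A sketch of building one with the standard library; the schema (a list of dicts with `type`, `data`, and `label` keys) is assumed from samexporter's documentation, so verify it against the version you install:

```python
import json

# Hypothetical prompt file for samexporter's --prompt flag.
prompts = [
    {"type": "point", "data": [460, 375], "label": 1},   # +point: include this region
    {"type": "point", "data": [120, 80], "label": 0},    # -point: exclude this region
    {"type": "rectangle", "data": [50, 60, 600, 500]},   # bounding box: x1, y1, x2, y2
]

with open("prompt.json", "w") as f:
    json.dump(prompts, f)
```

Coordinates are pixel positions in the original image, not the encoder's resized input.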
## Use with AnyLabeling (Recommended)
AnyLabeling is a desktop annotation tool with a built-in model manager that downloads, caches, and runs these models automatically; no coding is required.
- Install: `pip install anylabeling`
- Launch: `anylabeling`
- Click the **Brain** button, then select a SAM model from the dropdown
- Use point or rectangle prompts to segment objects
## Use Programmatically with ONNX Runtime
```python
import urllib.request
import zipfile

# Download and extract the encoder/decoder pair
url = "https://huggingface.co/vietanhdev/segment-anything-onnx-models/resolve/main/sam_vit_b_01ec64.zip"
urllib.request.urlretrieve(url, "sam_vit_b_01ec64.zip")
with zipfile.ZipFile("sam_vit_b_01ec64.zip") as z:
    z.extractall("sam_vit_b_01ec64")
```
Then use samexporter's inference module:

```bash
pip install samexporter

python -m samexporter.inference \
    --encoder_model sam_vit_b_01ec64/sam_vit_b_encoder.onnx \
    --decoder_model sam_vit_b_01ec64/sam_vit_b_decoder.onnx \
    --image photo.jpg \
    --prompt prompt.json \
    --output result.png
```
## Re-export from Source

To re-export or customize the models using samexporter:
```bash
pip install samexporter

# Export SAM ViT-H encoder + decoder
python -m samexporter.export_encoder \
    --checkpoint original_models/sam_vit_h_4b8939.pth \
    --output output_models/sam_vit_h_4b8939.encoder.onnx \
    --model-type vit_h --use-preprocess

python -m samexporter.export_decoder \
    --checkpoint original_models/sam_vit_h_4b8939.pth \
    --output output_models/sam_vit_h_4b8939.decoder.onnx \
    --model-type vit_h --return-single-mask

# Or convert all SAM variants at once:
bash convert_all_meta_sam.sh
```
## Related Repositories
| Repo | Description |
|---|---|
| vietanhdev/samexporter | Export scripts, inference code, conversion tools |
| vietanhdev/anylabeling | Desktop annotation app powered by these models |
| facebookresearch/segment-anything | Original SAM by Meta |
| ChaoningZhang/MobileSAM | Original MobileSAM |
## License
The ONNX models are derived from Meta's SAM and MobileSAM, both released under the Apache 2.0 license. The export code is part of samexporter, released under the MIT license.
