vietanhdev committed · Commit 146d646 · verified · 1 Parent(s): cd5a2fa

Add README
---
license: apache-2.0
tags:
- image-segmentation
- segment-anything
- segment-anything-2
- onnx
- onnxruntime
library_name: onnxruntime
---

# Segment Anything 2 (SAM 2) ONNX Models

ONNX exports of Meta's [Segment Anything Model 2 (SAM 2)](https://github.com/facebookresearch/segment-anything-2), ready for CPU/GPU inference with [ONNX Runtime](https://onnxruntime.ai/); no PyTorch is required at runtime.

These models power AI-assisted image annotation in **[AnyLabeling](https://github.com/vietanhdev/anylabeling)** and were exported with **[samexporter](https://github.com/vietanhdev/samexporter)**.

> **Looking for SAM 2.1?** See [vietanhdev/segment-anything-2.1-onnx-models](https://huggingface.co/vietanhdev/segment-anything-2.1-onnx-models) for models with improved accuracy.

## Available Models

| File | Variant | Notes |
|------|---------|-------|
| `sam2_hiera_tiny.zip` | SAM 2 Hiera-Tiny | Smallest, fastest |
| `sam2_hiera_small.zip` | SAM 2 Hiera-Small | Good speed/accuracy balance |
| `sam2_hiera_base_plus.zip` | SAM 2 Hiera-Base+ | Higher accuracy |
| `sam2_hiera_large.zip` | SAM 2 Hiera-Large | Most accurate, slowest |

Each zip contains two ONNX files: an **encoder** (runs once per image) and a **decoder** (runs interactively for each prompt).
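
On the encoder side of that pipeline, the image is typically resized and normalized before the network runs. The 1024×1024 input size and ImageNet mean/std below are assumptions based on the original SAM 2 preprocessing; verify them against your exported model's declared input shape before relying on this sketch:

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 1024) -> np.ndarray:
    """Nearest-neighbor resize + normalize an RGB uint8 image into the
    (1, 3, size, size) float32 layout a SAM-style encoder expects."""
    h, w, _ = image.shape
    # Index maps for a cheap nearest-neighbor resize to size x size.
    ys = (np.arange(size) * h // size).clip(0, h - 1)
    xs = (np.arange(size) * w // size).clip(0, w - 1)
    resized = image[ys][:, xs].astype(np.float32) / 255.0
    # ImageNet statistics, as used by the original SAM 2 preprocessing.
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    tensor = (resized - mean) / std
    return tensor.transpose(2, 0, 1)[None]  # HWC -> NCHW, add batch dim

batch = preprocess(np.zeros((480, 640, 3), dtype=np.uint8))
print(batch.shape)
```

The resulting tensor is what you would feed to the encoder session; the decoder then consumes the encoder's embeddings plus your prompts.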
30
+
31
+ ## Prompt Types
32
+
33
+ - **Point** (`+point` / `-point`): click to include/exclude regions
34
+ - **Rectangle**: draw a bounding box around the target object
35
+

## Use with AnyLabeling (Recommended)

[AnyLabeling](https://github.com/vietanhdev/anylabeling) is a desktop annotation tool with a built-in model manager that downloads, caches, and runs these models automatically; no coding required.

1. Install: `pip install anylabeling`
2. Launch: `anylabeling`
3. Click the **Brain** button → select a **Segment Anything 2** model from the dropdown
4. Use point or rectangle prompts to segment objects

[![AnyLabeling demo](https://user-images.githubusercontent.com/18329471/236625792-07f01838-3f69-48b0-a12e-30bad27bd921.gif)](https://github.com/vietanhdev/anylabeling)

## Use Programmatically with ONNX Runtime

```python
import urllib.request
import zipfile

# Download the Tiny variant and unpack its encoder/decoder ONNX files.
url = "https://huggingface.co/vietanhdev/segment-anything-2-onnx-models/resolve/main/sam2_hiera_tiny.zip"
urllib.request.urlretrieve(url, "sam2_hiera_tiny.zip")
with zipfile.ZipFile("sam2_hiera_tiny.zip") as z:
    z.extractall("sam2_hiera_tiny")
```
56
+
57
+ Then use [samexporter](https://github.com/vietanhdev/samexporter)'s inference module:
58
+
59
+ ```bash
60
+ pip install samexporter
61
+ python -m samexporter.inference \
62
+ --encoder_model sam2_hiera_tiny/sam2_hiera_tiny.encoder.onnx \
63
+ --decoder_model sam2_hiera_tiny/sam2_hiera_tiny.decoder.onnx \
64
+ --image photo.jpg \
65
+ --prompt prompt.json \
66
+ --output result.png \
67
+ --sam_variant sam2
68
+ ```
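
The `--prompt` argument points to a JSON file describing the clicks and boxes. The key names below follow samexporter's point/rectangle prompt format, but treat them as an assumption to check against the samexporter README for your version. This sketch writes a file with one positive point and one bounding box:

```python
import json

# Hypothetical prompt: one positive point (label 1 = include) and one
# bounding box given as [x1, y1, x2, y2] pixel coordinates.
prompt = [
    {"type": "point", "data": [460, 375], "label": 1},
    {"type": "rectangle", "data": [350, 250, 580, 500]},
]

with open("prompt.json", "w") as f:
    json.dump(prompt, f, indent=2)

print(json.dumps(prompt))
```

A negative point (exclude a region) would use `"label": 0` under the same assumed schema.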
69
+
70
+ ## Re-export from Source
71
+
72
+ To re-export or customize the models using [samexporter](https://github.com/vietanhdev/samexporter):
73
+
74
+ ```bash
75
+ pip install samexporter
76
+ pip install git+https://github.com/facebookresearch/segment-anything-2.git
77
+
78
+ # Download SAM 2 checkpoints
79
+ cd original_models && bash download_sam2.sh && cd ..
80
+
81
+ # Export Tiny variant
82
+ python -m samexporter.export_sam2 \
83
+ --checkpoint original_models/sam2_hiera_tiny.pt \
84
+ --output_encoder output_models/sam2_hiera_tiny.encoder.onnx \
85
+ --output_decoder output_models/sam2_hiera_tiny.decoder.onnx \
86
+ --model_type sam2_hiera_tiny
87
+
88
+ # Or convert all SAM 2 variants at once:
89
+ bash convert_all_meta_sam2.sh
90
+ ```
91
+
92
+ ## Related Repositories
93
+
94
+ | Repo | Description |
95
+ |------|-------------|
96
+ | [vietanhdev/samexporter](https://github.com/vietanhdev/samexporter) | Export scripts, inference code, conversion tools |
97
+ | [vietanhdev/anylabeling](https://github.com/vietanhdev/anylabeling) | Desktop annotation app powered by these models |
98
+ | [vietanhdev/segment-anything-2.1-onnx-models](https://huggingface.co/vietanhdev/segment-anything-2.1-onnx-models) | Improved SAM 2.1 ONNX models |
99
+ | [facebookresearch/segment-anything-2](https://github.com/facebookresearch/segment-anything-2) | Original SAM 2 by Meta |
100
+

## License

The ONNX models are derived from Meta's SAM 2 and inherit its **Apache 2.0** license.
The export code is part of [samexporter](https://github.com/vietanhdev/samexporter), released under the **MIT** license.