ilessio-aiflowlab committed
Commit 3292c78 · verified · 1 Parent(s): a08720e

Upload folder using huggingface_hub

Files changed (5)
  1. README.md +234 -0
  2. config.json +233 -0
  3. model.safetensors +3 -0
  4. model_int8.pt +3 -0
  5. preprocessor_config.json +36 -0
README.md ADDED
@@ -0,0 +1,234 @@
---
license: apache-2.0
base_model: facebook/sam2.1-hiera-tiny
tags:
- robotics
- edge-deployment
- anima
- forge
- int8
- quantized
- sam2
- segmentation
- image-segmentation
- video-segmentation
- ros2
- jetson
- real-time
- vision
library_name: transformers
pipeline_tag: image-segmentation
model-index:
- name: sam2.1-hiera-tiny-int8
  results:
  - task:
      type: image-segmentation
    metrics:
    - name: Model Size (MB)
      type: model_size
      value: 152
    - name: Compression Ratio
      type: compression
      value: 2.0
    - name: Original Size (MB)
      type: original_size
      value: 298
---

# SAM 2.1 Hiera-Tiny — INT8 Quantized

> Meta's Segment Anything Model 2.1 (Hiera-Tiny backbone) quantized to INT8 for real-time robotic segmentation. **2.0x smaller** — from 298 MB to 152 MB — the smallest SAM2 variant for maximum speed on edge hardware.

This model is part of the **[RobotFlowLabs](https://huggingface.co/robotflowlabs)** model library, built for the **ANIMA** agentic robotics platform — a modular ROS2-native AI system that brings foundation model intelligence to real robots operating in the real world.

## Why This Model Exists

When every millisecond counts — grasping a moving object, dodging an obstacle, responding to a human — you need the fastest possible segmentation. SAM2 Hiera-Tiny is the lightest SAM2 backbone, and at 152 MB after INT8 quantization, it fits comfortably alongside multiple other perception models on devices like the Jetson Nano or Orin NX.
## Model Details

| Property | Value |
|----------|-------|
| **Architecture** | Hiera-Tiny vision backbone + SAM2 decoder |
| **Input Resolution** | 1024 × 1024 |
| **Capabilities** | Image segmentation, video object tracking |
| **Backbone Stages** | 4 stages: [1, 2, 7, 2] blocks (12 total) |
| **Embed Dims** | [96, 192, 384, 768] per stage |
| **Attention Heads** | [1, 2, 4, 8] per stage |
| **Global Attention** | Blocks 5, 7, 9 |
| **Mask Decoder** | 256-dim hidden, 8 attention heads, 3 multi-mask outputs |
| **Memory Attention** | 4 layers, 2048-dim FFN, RoPE positional encoding |
| **Memory Bank** | 7 frames temporal context |
| **Original Model** | [`facebook/sam2.1-hiera-tiny`](https://huggingface.co/facebook/sam2.1-hiera-tiny) |
| **License** | Apache-2.0 |

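Two of the table's derived figures can be re-checked with a line or two of plain Python (no dependencies; this just re-derives the stated totals from the per-stage lists):

```python
# Re-derive two facts stated in the Model Details table.
blocks_per_stage = [1, 2, 7, 2]
embed_dim_per_stage = [96, 192, 384, 768]

total_blocks = sum(blocks_per_stage)
print(total_blocks)  # 12 — matches "(12 total)"

# Hiera doubles the embedding dimension at each stage transition.
doubles = all(b == 2 * a for a, b in zip(embed_dim_per_stage, embed_dim_per_stage[1:]))
print(doubles)  # True
```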
## Compression Results

Quantized on an NVIDIA L4 24GB GPU using INT8 dynamic quantization with SafeTensors export.

| Metric | Original | INT8 Quantized | Change |
|--------|----------|----------------|--------|
| **Total Size** | 298 MB | 152 MB | **2.0x smaller** |
| **INT8 Weights** | — | 32 MB | Quantized linear layers |
| **SafeTensors** | — | 120 MB | Full model weights |
| **Quantization** | FP32 | INT8 Dynamic | Per-tensor symmetric |
| **Format** | PyTorch | SafeTensors + INT8 .pt | Dual format |

> **Why SafeTensors instead of ONNX?** SAM2 uses custom CUDA operations (roi_align, deformable attention) that aren't supported by the ONNX standard. SafeTensors provides fast, safe loading directly into PyTorch with zero-copy memory mapping.

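The zero-copy claim follows from the SafeTensors file layout itself: an 8-byte little-endian header length, a JSON header mapping tensor names to dtype, shape, and byte offsets, then raw tensor bytes. A loader can `mmap` the file and view each tensor's bytes in place. A minimal pure-Python sketch of that layout (illustrative only — use the `safetensors` library in practice):

```python
import json
import mmap
import struct
import tempfile

def write_safetensors(path, tensors):
    """tensors: name -> (dtype_str, shape, raw_bytes). Minimal writer sketch."""
    header, offset, payload = {}, 0, b""
    for name, (dtype, shape, raw) in tensors.items():
        header[name] = {"dtype": dtype, "shape": shape,
                        "data_offsets": [offset, offset + len(raw)]}
        offset += len(raw)
        payload += raw
    hjson = json.dumps(header).encode()
    with open(path, "wb") as f:
        # [8-byte LE header length][JSON header][raw tensor bytes]
        f.write(struct.pack("<Q", len(hjson)) + hjson + payload)

def read_header(path):
    """Memory-map the file and parse only the header; tensor bytes stay on disk."""
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        (hlen,) = struct.unpack("<Q", mm[:8])
        header = json.loads(mm[8:8 + hlen])
        return header, 8 + hlen  # data section starts right after the header

path = tempfile.mktemp(suffix=".safetensors")
write_safetensors(path, {"w": ("F32", [2, 2], bytes(16))})
header, data_start = read_header(path)
print(header["w"]["shape"])  # [2, 2]
```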
## Included Files

```
sam2.1-hiera-tiny-int8/
├── model_int8.pt             # 32 MB — INT8 quantized state dict
├── model.safetensors         # 120 MB — Full model in SafeTensors format
├── config.json               # Model configuration
├── preprocessor_config.json  # Image preprocessing config
└── README.md                 # This file
```

## Quick Start

### PyTorch (SafeTensors)

```python
from transformers import Sam2Model, Sam2Processor
from PIL import Image
import torch

# Load with SafeTensors (automatic)
model = Sam2Model.from_pretrained("robotflowlabs/sam2.1-hiera-tiny-int8")
processor = Sam2Processor.from_pretrained("facebook/sam2.1-hiera-tiny")

model.to("cuda").eval()

# Segment with a point prompt
image = Image.open("your_image.jpg")  # any RGB image
inputs = processor(
    images=image,
    input_points=[[[500, 375]]],  # (x, y) point prompt
    return_tensors="pt"
).to("cuda")

with torch.no_grad():
    outputs = model(**inputs)

masks = processor.post_process_masks(
    outputs.pred_masks,
    inputs["original_sizes"],
    inputs["reshaped_input_sizes"]
)
```

### INT8 Weights (Maximum Compression)

```python
import torch
from transformers import Sam2Model

# Load the architecture, then apply the INT8 weights
model = Sam2Model.from_pretrained("facebook/sam2.1-hiera-tiny")
int8_state = torch.load("model_int8.pt", map_location="cuda", weights_only=True)
model.load_state_dict(int8_state, strict=False)
```

### With FORGE (ANIMA Integration)

```python
from forge.vision import VisionEncoderRegistry

# FORGE handles optimal loading and batching
segmenter = VisionEncoderRegistry.load("sam2.1-hiera-tiny-int8")
masks = segmenter.segment(image, points=[[500, 375]])
```

## Use Cases in ANIMA

SAM2-Tiny is optimized for **latency-critical deployments**:

- **Real-Time Grasping** — Fastest segmentation for time-critical manipulation
- **Mobile Robots** — Lightweight enough for Jetson Nano-class devices
- **Multi-Model Stacking** — Leaves maximum VRAM for other perception models
- **Video Tracking** — Track objects across frames with 7-frame temporal memory
- **High-Frequency Control** — Segmentation at camera framerate for reactive behavior

## SAM2 Model Family

We provide all three SAM2.1 variants, optimized for different deployment scenarios:

| Model | Size | Speed / Quality | Best For |
|-------|------|-----------------|----------|
| [sam2.1-hiera-large-int8](https://huggingface.co/robotflowlabs/sam2.1-hiera-large-int8) | 1.0 GB | Highest quality | Research, high-accuracy tasks |
| [sam2.1-hiera-small-int8](https://huggingface.co/robotflowlabs/sam2.1-hiera-small-int8) | 186 MB | Balanced | Production robotics |
| **[sam2.1-hiera-tiny-int8](https://huggingface.co/robotflowlabs/sam2.1-hiera-tiny-int8)** | **152 MB** | **Fastest** | **Real-time edge, Jetson Nano** |

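For deployments that must fit a fixed VRAM budget, the choice in the table reduces to "largest variant that fits". A hypothetical helper (not part of FORGE; repo IDs and sizes taken from the table above):

```python
# Hypothetical selection helper: pick the largest SAM2.1 INT8 variant
# that fits a given VRAM budget. Sizes are the published model sizes in MB.
VARIANTS = [  # (repo id, size in MB), largest first
    ("robotflowlabs/sam2.1-hiera-large-int8", 1024),
    ("robotflowlabs/sam2.1-hiera-small-int8", 186),
    ("robotflowlabs/sam2.1-hiera-tiny-int8", 152),
]

def pick_variant(vram_budget_mb: int) -> str:
    for repo, size_mb in VARIANTS:
        if size_mb <= vram_budget_mb:
            return repo
    raise ValueError(f"No SAM2.1 variant fits in {vram_budget_mb} MB")

print(pick_variant(160))  # robotflowlabs/sam2.1-hiera-tiny-int8
```

Note that model weights are only part of the VRAM story — activations and the memory bank add overhead on top of the file size.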
## Intended Use

### Designed For
- Lowest-latency segmentation in robotic control loops
- Edge devices with limited VRAM (Jetson Nano, Orin NX)
- Multi-model inference stacks where VRAM is shared
- Real-time video object tracking

### Limitations
- Smaller backbone means lower accuracy on complex scenes vs Large/Small variants
- INT8 quantization may slightly reduce mask boundary precision
- Requires a prompt (point, box, or mask) — not a panoptic segmenter
- Inherits biases from the SA-V dataset

### Out of Scope
- Medical image segmentation without domain-specific validation
- Autonomous driving perception
- Surveillance or tracking of individuals

## Technical Details

### Compression Pipeline

```
Original SAM2.1 Hiera-Tiny (FP32, 298 MB)

├─→ torchao INT8 dynamic quantization (GPU-native)
│     └─→ model_int8.pt (32 MB)

└─→ SafeTensors export (roi_align not ONNX-compatible)
      └─→ model.safetensors (120 MB)
```

- **Quantization**: INT8 dynamic activation + INT8 weight via `torchao` on NVIDIA L4 GPU
- **Export**: SafeTensors format — zero-copy memory mapping, fast loading
- **Why not ONNX**: SAM2's roi_align and deformable attention are custom CUDA ops
- **Hardware**: NVIDIA L4 24GB, CUDA 13.0, PyTorch 2.10, Python 3.14

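"Per-tensor symmetric" (from the Compression Results table) means one scale per tensor with the zero-point fixed at 0. A dependency-free sketch of that scheme — the actual pipeline uses torchao's GPU kernels, this only illustrates the arithmetic:

```python
# Per-tensor symmetric INT8 quantization: one scale per tensor, zero-point 0.
# Sketch of the scheme only; the real pipeline uses torchao on GPU.
def quantize_int8(values):
    scale = max(abs(v) for v in values) / 127.0 or 1.0  # guard all-zero tensors
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
print(q)  # [50, -127, 2, 100]
# Round-trip error is bounded by the quantization step (the scale):
print(max(abs(w - r) for w, r in zip(weights, restored)) < scale)  # True
```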
## Attribution

- **Original Model**: [`facebook/sam2.1-hiera-tiny`](https://huggingface.co/facebook/sam2.1-hiera-tiny) by Meta AI (FAIR)
- **License**: [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) — free for commercial and research use
- **Paper**: [SAM 2: Segment Anything in Images and Videos](https://arxiv.org/abs/2408.00714) — Ravi et al., 2024
- **Dataset**: SA-V — 50.9K videos, 642.6K masklets
- **Compressed by**: [RobotFlowLabs](https://huggingface.co/robotflowlabs) using [FORGE](https://github.com/robotflowlabs/forge)

## Citation

```bibtex
@article{ravi2024sam2,
  title={SAM 2: Segment Anything in Images and Videos},
  author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and others},
  journal={arXiv preprint arXiv:2408.00714},
  year={2024}
}
```

```bibtex
@misc{robotflowlabs2026anima,
  title={ANIMA: Agentic Networked Intelligence for Modular Autonomy},
  author={RobotFlowLabs},
  year={2026},
  url={https://huggingface.co/robotflowlabs}
}
```

---

<p align="center">
  <b>Built with FORGE by <a href="https://huggingface.co/robotflowlabs">RobotFlowLabs</a></b><br>
  Optimizing foundation models for real robots.
</p>
config.json ADDED
@@ -0,0 +1,233 @@
{
  "architectures": [
    "Sam2VideoModel"
  ],
  "enable_occlusion_spatial_embedding": true,
  "enable_temporal_pos_encoding_for_object_pointers": true,
  "image_size": 1024,
  "initializer_range": 0.02,
  "mask_decoder_config": {
    "attention_downsample_rate": 2,
    "dynamic_multimask_stability_delta": 0.05,
    "dynamic_multimask_stability_thresh": 0.98,
    "dynamic_multimask_via_stability": true,
    "hidden_act": "gelu",
    "hidden_size": 256,
    "iou_head_depth": 3,
    "iou_head_hidden_dim": 256,
    "mlp_dim": 2048,
    "model_type": "",
    "num_attention_heads": 8,
    "num_hidden_layers": 2,
    "num_multimask_outputs": 3
  },
  "mask_downsampler_embed_dim": 256,
  "mask_downsampler_hidden_act": "gelu",
  "mask_downsampler_kernel_size": 3,
  "mask_downsampler_padding": 1,
  "mask_downsampler_stride": 2,
  "mask_downsampler_total_stride": 16,
  "max_object_pointers_in_encoder": 16,
  "memory_attention_downsample_rate": 1,
  "memory_attention_dropout": 0.1,
  "memory_attention_feed_forward_hidden_act": "relu",
  "memory_attention_feed_forward_hidden_size": 2048,
  "memory_attention_hidden_size": 256,
  "memory_attention_num_attention_heads": 1,
  "memory_attention_num_layers": 4,
  "memory_attention_rope_dropout": 0.1,
  "memory_attention_rope_feat_sizes": [
    64,
    64
  ],
  "memory_attention_rope_theta": 10000,
  "memory_encoder_hidden_size": 256,
  "memory_encoder_output_channels": 64,
  "memory_fuser_embed_dim": 256,
  "memory_fuser_hidden_act": "gelu",
  "memory_fuser_intermediate_dim": 1024,
  "memory_fuser_kernel_size": 7,
  "memory_fuser_layer_scale_init_value": 1e-06,
  "memory_fuser_num_layers": 2,
  "memory_fuser_padding": 3,
  "model_type": "sam2_video",
  "multimask_max_pt_num": 1,
  "multimask_min_pt_num": 0,
  "multimask_output_for_tracking": true,
  "multimask_output_in_sam": true,
  "num_maskmem": 7,
  "prompt_encoder_config": {
    "hidden_act": "gelu",
    "hidden_size": 256,
    "image_size": 1024,
    "layer_norm_eps": 1e-06,
    "mask_input_channels": 16,
    "model_type": "",
    "num_point_embeddings": 4,
    "patch_size": 16,
    "scale": 1
  },
  "sigmoid_bias_for_mem_enc": -10.0,
  "sigmoid_scale_for_mem_enc": 20.0,
  "torch_dtype": "float32",
  "transformers_version": "4.56.0.dev0",
  "vision_config": {
    "backbone_channel_list": [
      768,
      384,
      192,
      96
    ],
    "backbone_config": {
      "_name_or_path": "",
      "add_cross_attention": false,
      "architectures": null,
      "bad_words_ids": null,
      "begin_suppress_tokens": null,
      "blocks_per_stage": [
        1,
        2,
        7,
        2
      ],
      "bos_token_id": null,
      "chunk_size_feed_forward": 0,
      "cross_attention_hidden_size": null,
      "decoder_start_token_id": null,
      "diversity_penalty": 0.0,
      "do_sample": false,
      "early_stopping": false,
      "embed_dim_per_stage": [
        96,
        192,
        384,
        768
      ],
      "encoder_no_repeat_ngram_size": 0,
      "eos_token_id": null,
      "exponential_decay_length_penalty": null,
      "finetuning_task": null,
      "forced_bos_token_id": null,
      "forced_eos_token_id": null,
      "global_attention_blocks": [
        5,
        7,
        9
      ],
      "hidden_act": "gelu",
      "hidden_size": 96,
      "id2label": {
        "0": "LABEL_0",
        "1": "LABEL_1"
      },
      "image_size": [
        1024,
        1024
      ],
      "initializer_range": 0.02,
      "is_decoder": false,
      "is_encoder_decoder": false,
      "label2id": {
        "LABEL_0": 0,
        "LABEL_1": 1
      },
      "layer_norm_eps": 1e-06,
      "length_penalty": 1.0,
      "max_length": 20,
      "min_length": 0,
      "mlp_ratio": 4.0,
      "model_type": "sam2_hiera_det_model",
      "no_repeat_ngram_size": 0,
      "num_attention_heads": 1,
      "num_attention_heads_per_stage": [
        1,
        2,
        4,
        8
      ],
      "num_beam_groups": 1,
      "num_beams": 1,
      "num_channels": 3,
      "num_query_pool_stages": 3,
      "num_return_sequences": 1,
      "output_attentions": false,
      "output_hidden_states": false,
      "output_scores": false,
      "pad_token_id": null,
      "patch_kernel_size": [
        7,
        7
      ],
      "patch_padding": [
        3,
        3
      ],
      "patch_stride": [
        4,
        4
      ],
      "prefix": null,
      "problem_type": null,
      "pruned_heads": {},
      "query_stride": [
        2,
        2
      ],
      "remove_invalid_values": false,
      "repetition_penalty": 1.0,
      "return_dict": true,
      "return_dict_in_generate": false,
      "sep_token_id": null,
      "suppress_tokens": null,
      "task_specific_params": null,
      "temperature": 1.0,
      "tf_legacy_loss": false,
      "tie_encoder_decoder": false,
      "tie_word_embeddings": true,
      "tokenizer_class": null,
      "top_k": 50,
      "top_p": 1.0,
      "torch_dtype": null,
      "torchscript": false,
      "typical_p": 1.0,
      "use_bfloat16": false,
      "window_positional_embedding_background_size": [
        7,
        7
      ],
      "window_size_per_stage": [
        8,
        4,
        14,
        7
      ]
    },
    "backbone_feature_sizes": [
      [
        256,
        256
      ],
      [
        128,
        128
      ],
      [
        64,
        64
      ]
    ],
    "fpn_hidden_size": 256,
    "fpn_kernel_size": 1,
    "fpn_padding": 0,
    "fpn_stride": 1,
    "fpn_top_down_levels": [
      2,
      3
    ],
    "hidden_act": "gelu",
    "initializer_range": 0.02,
    "layer_norm_eps": 1e-06,
    "model_type": "sam2_vision_model",
    "num_feature_levels": 3
  }
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c5f6e7ac602b6092644db96f5b31617616c4a240647a90af1ec466f1a9cd567e
size 125802100
model_int8.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:30575abffcaed7e9e60f89ab47cd5b69177d1d185832c792eef374262f87bc20
size 33385680
preprocessor_config.json ADDED
@@ -0,0 +1,36 @@
{
  "crop_size": null,
  "data_format": "channels_first",
  "default_to_square": true,
  "device": null,
  "disable_grouping": null,
  "do_center_crop": null,
  "do_convert_rgb": true,
  "do_normalize": true,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [
    0.485,
    0.456,
    0.406
  ],
  "image_processor_type": "Sam2ImageProcessorFast",
  "image_std": [
    0.229,
    0.224,
    0.225
  ],
  "input_data_format": null,
  "mask_size": {
    "height": 256,
    "width": 256
  },
  "processor_class": "Sam2VideoProcessor",
  "resample": 2,
  "rescale_factor": 0.00392156862745098,
  "return_tensors": null,
  "size": {
    "height": 1024,
    "width": 1024
  }
}