# SAM 2.1 Hiera-Large (INT8 Quantized)

Meta's Segment Anything Model 2.1 (Hiera-Large backbone), quantized to INT8 for real-time robotic segmentation: 1.7× smaller (1.7 GB → 1.0 GB), with both image and video segmentation capabilities preserved.

This model is part of the RobotFlowLabs model library, built for the ANIMA agentic robotics platform: a modular, ROS2-native AI system that brings foundation model intelligence to real robots operating in the real world.

## Why This Model Exists

Robotic manipulation and navigation require pixel-precise understanding of the scene. SAM2 is the state of the art for promptable segmentation: given a point, box, or mask prompt, it segments any object in images or tracks it through video. But at 1.7 GB, deploying SAM2 alongside other perception models on edge hardware eats precious VRAM.

We quantized SAM2.1 to INT8 and exported the weights in SafeTensors format so robots can run segmentation in real time alongside depth estimation, feature extraction, and action generation, all on a single edge GPU.
## Model Details
| Property | Value |
|---|---|
| Architecture | Hiera-Large vision backbone + SAM2 decoder |
| Input Resolution | 1024 Γ 1024 |
| Capabilities | Image segmentation, video object tracking |
| Mask Decoder | 256-dim hidden, 8 attention heads, 3 multi-mask outputs |
| Memory Attention | 4 layers, 2048-dim FFN, RoPE positional encoding |
| Memory Bank | 7 frames temporal context |
| Original Model | facebook/sam2.1-hiera-large |
| License | Apache-2.0 |
## Compression Results
Quantized on an NVIDIA L4 24GB GPU using INT8 dynamic quantization with SafeTensors export.
| Metric | Original | INT8 Quantized | Change |
|---|---|---|---|
| Total Size | 1,713 MB | 1,038 MB | 1.7x smaller |
| INT8 Weights | N/A | 211 MB | Quantized linear layers |
| SafeTensors | N/A | 828 MB | Full model weights |
| Quantization | FP32 | INT8 Dynamic | Per-tensor symmetric |
| Format | PyTorch | SafeTensors + INT8 .pt | Dual format |
**Why SafeTensors instead of ONNX?** SAM2 uses custom CUDA operations (`roi_align`, deformable attention) that aren't supported by the ONNX standard. SafeTensors provides fast, safe loading directly into PyTorch with zero-copy memory mapping.
## Included Files

```
sam2.1-hiera-large-int8/
├── model_int8.pt             # 211 MB – INT8 quantized state dict
├── model.safetensors         # 828 MB – full model in SafeTensors format
├── config.json               # model configuration
├── preprocessor_config.json  # image preprocessing config
└── README.md                 # this file
```
## Quick Start

### PyTorch (SafeTensors)

```python
import torch
from PIL import Image
from transformers import Sam2Model, Sam2Processor

# Load with SafeTensors (automatic)
model = Sam2Model.from_pretrained("robotflowlabs/sam2.1-hiera-large-int8")
processor = Sam2Processor.from_pretrained("facebook/sam2.1-hiera-large")
model.to("cuda").eval()

# Any RGB image works here; the path is illustrative
image = Image.open("scene.jpg").convert("RGB")

# Segment with a point prompt
inputs = processor(
    images=image,
    input_points=[[[500, 375]]],  # (x, y) point prompt
    return_tensors="pt"
).to("cuda")

with torch.no_grad():
    outputs = model(**inputs)

masks = processor.post_process_masks(
    outputs.pred_masks,
    inputs["original_sizes"],
    inputs["reshaped_input_sizes"]
)
```
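The mask decoder emits three candidate masks per prompt (see Model Details above). One simple selection heuristic is to pick the largest mask by pixel area; the sketch below assumes the candidates have been thresholded to a `(3, H, W)` boolean array, which is an illustrative shape, not the exact output layout:

```python
import numpy as np

def pick_largest_mask(masks: np.ndarray) -> np.ndarray:
    """masks: (num_candidates, H, W) boolean array; returns one (H, W) mask."""
    areas = masks.reshape(masks.shape[0], -1).sum(axis=1)
    return masks[int(np.argmax(areas))]

# Toy example with three candidate masks of different sizes.
candidates = np.zeros((3, 4, 4), dtype=bool)
candidates[0, :1, :1] = True   # area 1
candidates[1, :2, :3] = True   # area 6
candidates[2, :1, :2] = True   # area 2
best = pick_largest_mask(candidates)
assert best.sum() == 6
```

In practice you would usually rank candidates by the model's predicted IoU scores rather than raw area; this is only a fallback heuristic.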
### INT8 Weights (Maximum Compression)

```python
import torch
from transformers import Sam2Model

# Load the architecture, then apply the INT8 weights
model = Sam2Model.from_pretrained("facebook/sam2.1-hiera-large")
int8_state = torch.load("model_int8.pt", map_location="cuda", weights_only=True)
model.load_state_dict(int8_state, strict=False)
```
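Because `strict=False` silently skips mismatched keys, it is worth inspecting what actually loaded. A self-contained sketch of that check on a toy module (not the real SAM2 graph):

```python
import torch
import torch.nn as nn

# load_state_dict returns a named tuple of missing and unexpected keys,
# so you can confirm the INT8 state dict covered the layers you expected.
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))
partial_state = {"0.weight": torch.zeros(8, 8)}  # deliberately incomplete

result = model.load_state_dict(partial_state, strict=False)
print("missing:", result.missing_keys)        # keys the state dict did not provide
print("unexpected:", result.unexpected_keys)  # keys the model does not have
assert "0.bias" in result.missing_keys
```

After loading the real INT8 state dict, an empty `unexpected_keys` list and a short, expected `missing_keys` list indicate a clean load.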
### With FORGE (ANIMA Integration)

```python
from forge.vision import VisionEncoderRegistry

# FORGE handles optimal loading and batching
segmenter = VisionEncoderRegistry.load("sam2.1-hiera-large-int8")
masks = segmenter.segment(image, points=[[500, 375]])
```
## Use Cases in ANIMA

SAM2 is the segmentation backbone across multiple ANIMA modules:

- **Object Isolation**: Segment graspable objects from cluttered scenes for manipulation planning
- **Workspace Mapping**: Identify free space, obstacles, and surfaces for navigation
- **Video Tracking**: Track objects across frames during manipulation sequences (7-frame temporal memory)
- **Safety Zones**: Segment human body parts and keep-out regions for safe human-robot collaboration
- **Instance Separation**: Distinguish individual objects when multiple similar items are present
- **Bin Picking**: Segment individual parts from a bin for industrial pick-and-place
## SAM2 Model Family

We provide all three SAM2.1 variants, optimized for different deployment scenarios:

| Model | Params | Size | Speed | Best For |
|---|---|---|---|---|
| sam2.1-hiera-large-int8 | Large | 1.0 GB | Slowest, highest quality | Research, high-accuracy tasks |
| sam2.1-hiera-small-int8 | Small | 186 MB | Balanced | Production robotics |
| sam2.1-hiera-tiny-int8 | Tiny | 152 MB | Fastest | Real-time edge, Jetson Nano |
## About ANIMA

ANIMA is a modular, ROS2-native agentic robotics platform developed by RobotFlowLabs. It combines 58 specialized AI modules, from perception and planning to manipulation and safety, into a unified system that enables robots to understand, reason, and act in unstructured real-world environments.

Every foundation model in ANIMA must run on edge hardware (Jetson Orin, industrial PCs) under real-time constraints. That's why we built FORGE, our compression and distillation pipeline, and why we're releasing optimized model variants publicly.
We believe the robotics community deserves production-ready models, not just research checkpoints.
## Other Collections

- **ANIMA Vision**: SAM2, DINOv2, CLIP, SigLIP, Depth Anything
- **ANIMA Language**: Qwen2.5, SmolLM2
- **ANIMA VLM**: Qwen2.5-VL
- **ANIMA VLA**: SmolVLA, RDT2-FM, FORGE students
## Intended Use

### Designed For
- Promptable segmentation in robotic manipulation pipelines
- Video object tracking during multi-step tasks
- Instance segmentation for bin picking and object isolation
- Real-time scene parsing on edge GPUs (Jetson Orin, L4)
### Limitations
- INT8 quantization may slightly reduce mask boundary precision on very fine structures
- Video tracking requires sequential frame processing (not parallelizable)
- Requires a prompt (point, box, or mask); it is not a panoptic segmenter
- Inherits biases from SA-V dataset (primarily indoor/outdoor natural scenes)
### Out of Scope
- Medical image segmentation without domain-specific validation
- Autonomous driving perception (not trained on driving data)
- Surveillance or tracking of individuals
## Technical Details

### Compression Pipeline

```
Original SAM2.1 Hiera-Large (FP32, 1.7 GB)
│
├── torchao INT8 dynamic quantization (GPU-native)
│   └── model_int8.pt (211 MB)
│
└── SafeTensors export (roi_align not ONNX-compatible)
    └── model.safetensors (828 MB)
```

- Quantization: INT8 dynamic activation + INT8 weight via `torchao` on an NVIDIA L4 GPU
- Export: SafeTensors format for zero-copy memory mapping, fast loading, and framework-agnostic storage
- Why not ONNX: SAM2's `roi_align` and deformable attention are custom CUDA ops that ONNX opset 18 cannot represent
- Hardware: NVIDIA L4 24 GB, CUDA 13.0, PyTorch 2.10, Python 3.14
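The per-tensor symmetric scheme noted in the compression table reduces to a single scale per tensor with the zero-point fixed at 0. A minimal numpy sketch of the quantize/dequantize round trip (illustrative of the math only, not the torchao implementation):

```python
import numpy as np

def int8_symmetric_quantize(w: np.ndarray):
    """Per-tensor symmetric INT8: one scale for the whole tensor, zero-point 0."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)
q, scale = int8_symmetric_quantize(w)
w_hat = q.astype(np.float32) * scale  # dequantize

# Round-trip error is bounded by half a quantization step per element.
assert np.abs(w - w_hat).max() <= scale / 2 + 1e-6
```

Dynamic quantization applies the same idea to activations at runtime, computing each activation tensor's scale on the fly, so no calibration dataset is needed.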
## Attribution

- Original Model: `facebook/sam2.1-hiera-large` by Meta AI (FAIR)
- License: Apache-2.0, free for commercial and research use
- Paper: *SAM 2: Segment Anything in Images and Videos*, Ravi et al., 2024
- Dataset: SA-V (50.9K videos, 642.6K masklets)
- Compressed by: RobotFlowLabs using FORGE
## Citation

```bibtex
@article{ravi2024sam2,
  title={SAM 2: Segment Anything in Images and Videos},
  author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and others},
  journal={arXiv preprint arXiv:2408.00714},
  year={2024}
}

@misc{robotflowlabs2026anima,
  title={ANIMA: Agentic Networked Intelligence for Modular Autonomy},
  author={RobotFlowLabs},
  year={2026},
  url={https://huggingface.co/robotflowlabs}
}
```
Built with FORGE by RobotFlowLabs
Optimizing foundation models for real robots.