# SAM3 — Segment Anything Model 3
This is an ungated mirror of Meta's SAM3 model weights, redistributed under the SAM License for easier access.
## What is SAM3?
SAM3 (Segment Anything Model 3) is Meta AI's state-of-the-art foundation model for promptable segmentation in images and videos. It supports:
- Text prompts — segment objects by describing them (e.g. "the dog")
- Bounding box prompts — precise region targeting
- Point prompts — click-to-segment
- Video tracking — track and segment objects across video frames
- Automatic mask generation — segment all objects in a scene
## Files
| File | Description |
|---|---|
| `sam3.safetensors` | Model weights (safetensors format, converted from the official `.pt`) |
| `LICENSE` | SAM License (required for redistribution) |
## ⚠️ Conversion Note

The `sam3.safetensors` in this repo was converted directly from the official `sam3.pt` checkpoint using:

```python
import torch
from safetensors.torch import save_file

# Load the original PyTorch checkpoint on CPU, then re-save it
# in safetensors format with the key names unchanged.
state_dict = torch.load("sam3.pt", map_location="cpu")
save_file(state_dict, "sam3.safetensors")
```
This preserves the original key names (`detector.*`, `tracker.*`). Do not use the HuggingFace Transformers-format safetensors from `facebook/sam3`: those use different key names (`detector_model.*`) and are incompatible with `build_sam3_video_model()`.
## Usage

```shell
# Install the official sam3 package
pip install sam3
```

```python
from sam3.model_builder import build_sam3_image_model

model = build_sam3_image_model(checkpoint_path="sam3.safetensors")
```
## License
This model is distributed under the SAM License — see the LICENSE file. Key points:
- ✅ Commercial use permitted
- ✅ Redistribution permitted (with license included)
- ✅ Derivative works permitted
- ❌ No military/warfare, nuclear, or espionage use
- ❌ No reverse engineering
## Credits
- Original model by: Meta AI (FAIR)
- Original HuggingFace repo: facebook/sam3
- Paper: Segment Anything Model 3 (submitted to ICLR 2026)
- Redistributed by: Æmotion Studio for use with ComfyUI-FFMPEGA