---
language:
  - en
pipeline_tag: image-segmentation
tags:
  - sam2
  - segment-anything
  - mlx
  - apple-silicon
  - video-segmentation
  - object-tracking
license: apache-2.0
---

# SAM2.1 MLX Weights

MLX-format weights for SAM2.1 (Segment Anything Model 2.1), for use with Apple's MLX framework on Apple Silicon.

## Quick Start

1. Clone the code and install dependencies:

```bash
git clone https://github.com/eisneim/sam2.1_mlx.git
cd sam2.1_mlx
pip install mlx opencv-python safetensors numpy
```

2. Download weights from this repo:

```bash
# Base Plus (recommended, best quality/speed balance)
huggingface-cli download eisneim/sam2.1_mlx_weights sam2.1_hiera_base_plus.safetensors --local-dir weights/

# Small (faster, slightly lower quality)
huggingface-cli download eisneim/sam2.1_mlx_weights sam2.1_hiera_small.safetensors --local-dir weights/
```

Or manually download the `.safetensors` files and place them in `weights/`.
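After downloading, you can sanity-check a file without loading any weights: the safetensors format begins with an 8-byte little-endian header length, followed by that many bytes of JSON describing each tensor. A minimal stdlib-only sketch (the helper name `list_tensors` is illustrative, not part of this repo):

```python
import json
import struct

def list_tensors(path):
    """Return {tensor_name: (dtype, shape)} from a .safetensors file,
    parsing only the JSON header (no tensor data is read)."""
    with open(path, "rb") as f:
        # First 8 bytes: little-endian u64 giving the JSON header length.
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return {
        name: (info["dtype"], info["shape"])
        for name, info in header.items()
        if name != "__metadata__"  # optional file-level metadata key
    }
```

Running `list_tensors("weights/sam2.1_hiera_base_plus.safetensors")` should print the model's parameter names and shapes; if it raises, the download is truncated or corrupt.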

3. Run:

```bash
# Video tracking — click on an object in the first frame
python inference_video.py -i your_video.mp4

# Image segmentation — click on an object
python inference_image.py -i your_image.jpg

# Use the small model
python inference_video.py -i your_video.mp4 --model small
```
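The flags above follow a simple pattern: `-i` names the input, `--model` selects a weight file and defaults to Base Plus. A minimal `argparse` sketch of that interface (the `WEIGHTS` mapping and `build_parser` are illustrative, not the repo's actual code, which may define more options):

```python
import argparse

# Maps the --model choices shown above to the weight files in weights/.
WEIGHTS = {
    "base_plus": "weights/sam2.1_hiera_base_plus.safetensors",
    "small": "weights/sam2.1_hiera_small.safetensors",
}

def build_parser():
    """A sketch of a CLI accepting the flags used in the commands above."""
    parser = argparse.ArgumentParser(description="SAM2.1 MLX inference (sketch)")
    parser.add_argument("-i", "--input", required=True,
                        help="path to the input video or image")
    parser.add_argument("--model", choices=sorted(WEIGHTS),
                        default="base_plus",
                        help="which weight file to load")
    return parser
```

With this parser, `parse_args(["-i", "clip.mp4"])` yields `model="base_plus"`, matching the default behavior of the commands above.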

## Available Models

| Model | File | Size | Quality | Speed |
| --- | --- | --- | --- | --- |
| base_plus | `sam2.1_hiera_base_plus.safetensors` | ~300 MB | Best | ~130 fps |
| small | `sam2.1_hiera_small.safetensors` | ~150 MB | Good | ~200 fps |

## Converting Weights Yourself

If you prefer to convert from the original PyTorch checkpoints:

```bash
# Download PyTorch weights from Meta
wget https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt -P weights/

# Convert to MLX safetensors
python -m src.sam2.convert --src weights/sam2.1_hiera_base_plus.pt --dst weights/sam2.1_hiera_base_plus.safetensors
```
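Converting from the original checkpoint requires PyTorch installed. For illustration only, here is a minimal sketch of the safetensors container such a converter targets, using numpy arrays in place of the PyTorch state dict (`save_safetensors` is a hypothetical helper, not the repo's converter):

```python
import json
import struct

import numpy as np

def save_safetensors(tensors, path):
    """Write a dict of numpy arrays in safetensors layout:
    8-byte little-endian header length, JSON header, then raw tensor bytes."""
    dtype_names = {np.dtype(np.float32): "F32", np.dtype(np.float16): "F16"}
    header, payload, offset = {}, [], 0
    for name, arr in tensors.items():
        data = arr.tobytes()
        header[name] = {
            "dtype": dtype_names[arr.dtype],
            "shape": list(arr.shape),
            # Byte offsets are relative to the start of the data section.
            "data_offsets": [offset, offset + len(data)],
        }
        payload.append(data)
        offset += len(data)
    header_bytes = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(header_bytes)))
        f.write(header_bytes)
        for data in payload:
            f.write(data)
```

A real converter would first load the `.pt` state dict, move each tensor to CPU, and convert it to numpy before writing; in practice the `safetensors` package installed in the Quick Start handles this serialization for you.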

## Links

## License

Apache 2.0 (same as the original SAM2).