Image Segmentation
Transformers
Safetensors
sam2
sam2_video
feature-extraction
robotics
edge-deployment
anima
forge
int8
quantized
segmentation
video-segmentation
ros2
jetson
real-time
vision
Eval Results (legacy)
Instructions to use robotflowlabs/sam2.1-hiera-tiny-int8 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use robotflowlabs/sam2.1-hiera-tiny-int8 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-segmentation", model="robotflowlabs/sam2.1-hiera-tiny-int8")
```

```python
# Load model directly
from transformers import AutoImageProcessor, AutoModel

processor = AutoImageProcessor.from_pretrained("robotflowlabs/sam2.1-hiera-tiny-int8")
model = AutoModel.from_pretrained("robotflowlabs/sam2.1-hiera-tiny-int8")
```

- sam2
How to use robotflowlabs/sam2.1-hiera-tiny-int8 with sam2:

```python
# Use SAM2 with images
import torch
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor.from_pretrained("robotflowlabs/sam2.1-hiera-tiny-int8")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(<your_image>)
    masks, _, _ = predictor.predict(<input_prompts>)
```

```python
# Use SAM2 with videos
import torch
from sam2.sam2_video_predictor import SAM2VideoPredictor

predictor = SAM2VideoPredictor.from_pretrained("robotflowlabs/sam2.1-hiera-tiny-int8")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state(<your_video>)

    # add new prompts and instantly get the output on the same frame
    frame_idx, object_ids, masks = predictor.add_new_points(state, <your_prompts>)

    # propagate the prompts to get masklets throughout the video
    for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
        ...
```

- Notebooks
- Google Colab
- Kaggle
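The placeholders in the snippets above (`<your_image>`, `<input_prompts>`, and so on) stand in for real data. As a minimal sketch of the shapes involved, assuming the usual SAM2 conventions (the variable names here are illustrative, not part of the sam2 API): `set_image` accepts an RGB image as an `(H, W, 3)` uint8 array, and `predict` takes point prompts as a coordinate array plus a label array.

```python
import numpy as np

# Dummy RGB frame in the (H, W, 3) uint8 layout that set_image() accepts
# (illustrative stand-in for <your_image>)
image = np.zeros((480, 640, 3), dtype=np.uint8)

# Point prompts for predict(): (N, 2) pixel (x, y) coordinates plus
# (N,) labels, where 1 marks a foreground click and 0 a background click
# (illustrative stand-in for <input_prompts>)
point_coords = np.array([[320, 240], [100, 80]], dtype=np.float32)
point_labels = np.array([1, 0], dtype=np.int32)

print(image.shape, point_coords.shape, point_labels.shape)
```

With real data, these would be passed as `predictor.set_image(image)` followed by `predictor.predict(point_coords=point_coords, point_labels=point_labels)`.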