aimi-models/sam2

Tags: ONNX · sam2 · onnxruntime · segment-anything · mirror

Instructions to use aimi-models/sam2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

  • Libraries
  • sam2

    How to use aimi-models/sam2 with the sam2 library:

    # Use SAM2 with images
    import torch
    from sam2.sam2_image_predictor import SAM2ImagePredictor
    
    predictor = SAM2ImagePredictor.from_pretrained("aimi-models/sam2")
    
    with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
        predictor.set_image(<your_image>)
        masks, _, _ = predictor.predict(<input_prompts>)

    # Use SAM2 with videos
    import torch
    from sam2.sam2_video_predictor import SAM2VideoPredictor
    
    predictor = SAM2VideoPredictor.from_pretrained("aimi-models/sam2")
    
    with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
        state = predictor.init_state(<your_video>)
    
        # add new prompts and instantly get the output on the same frame
        frame_idx, object_ids, masks = predictor.add_new_points(state, <your_prompts>)
    
        # propagate the prompts to get masklets throughout the video
        for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
            ...
  • Notebooks
  • Google Colab
  • Kaggle
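Because this repository mirrors SAM2 ONNX exports, the models can also be opened directly with onnxruntime rather than the sam2 library. Below is a minimal sketch, assuming an export file such as `hiera-tiny/image_encoder.onnx` exists in the repo; the exact `.onnx` filenames inside the export folders are an assumption, so list the folder on the Hub to find the real names before using this.

```python
def load_onnx_session(repo_id="aimi-models/sam2",
                      filename="hiera-tiny/image_encoder.onnx"):
    """Download one ONNX export from the Hub and open an onnxruntime session.

    The default filename is hypothetical -- check the repo's file listing
    for the actual export names.
    """
    # Imports are deferred so the helper can be defined even when the
    # packages are not installed yet.
    from huggingface_hub import hf_hub_download
    import onnxruntime as ort

    path = hf_hub_download(repo_id, filename)
    # CPUExecutionProvider keeps the sketch portable; swap in
    # CUDAExecutionProvider if onnxruntime-gpu is installed.
    return ort.InferenceSession(path, providers=["CPUExecutionProvider"])
```

Once a session is open, `session.get_inputs()` reports the input names and shapes the export expects.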
sam2 (515 MB)
  • 1 contributor
History: 2 commits
woerns
Mirror SAM2 ONNX exports - Stage 6
e302cda verified 15 days ago
  • hiera-base-plus
    Mirror SAM2 ONNX exports - Stage 6 15 days ago
  • hiera-tiny
    Mirror SAM2 ONNX exports - Stage 6 15 days ago
  • .gitattributes
    1.52 kB
    initial commit 15 days ago
  • README.md
    1.55 kB
    Mirror SAM2 ONNX exports - Stage 6 15 days ago
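Since the file listing shows two separate export folders (hiera-base-plus and hiera-tiny), there is no need to pull the full 515 MB repository to use one variant. A sketch using huggingface_hub's filtered snapshot download:

```python
def fetch_export(repo_id="aimi-models/sam2", folder="hiera-tiny"):
    """Download only one export subfolder of the repo.

    allow_patterns restricts the snapshot to files matching the given
    glob, so the other (larger) export folder is skipped.
    """
    from huggingface_hub import snapshot_download  # deferred import

    # Returns the local path of the snapshot root; the requested folder
    # sits underneath it.
    return snapshot_download(repo_id, allow_patterns=[f"{folder}/*"])
```

Passing `folder="hiera-base-plus"` instead fetches the larger export.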