Instructions to use facebook/sam-vit-huge with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
How to use facebook/sam-vit-huge with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("mask-generation", model="facebook/sam-vit-huge")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForMaskGeneration

processor = AutoProcessor.from_pretrained("facebook/sam-vit-huge")
model = AutoModelForMaskGeneration.from_pretrained("facebook/sam-vit-huge")
```
RankSEG Integration: Optimize SAM masks by replacing thresholding
Hi SAM community,
I've integrated RankSEG with SAM to optimize mask predictions for Dice/IoU metrics. Sharing here as it might be useful for others working on segmentation tasks where these metrics matter.
Background
SAM produces excellent probability masks, but the standard approach is to threshold at 0.5. For tasks where Dice/IoU scores are critical (medical imaging, instance segmentation evaluation, etc.), we can do better by directly optimizing for these metrics.
RankSEG provides a principled way to convert SAM's probability outputs into masks that maximize Dice or IoU scores.
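To illustrate the idea (this is a simplified sketch, not RankSEG's actual algorithm; see the repo for the real implementation), here is a toy NumPy comparison between plain 0.5 thresholding and a Dice-aware decision rule. Under a pixel-independence approximation, the expected Dice of keeping the top-k most confident pixels is roughly `2 * sum(top-k probs) / (k + sum(all probs))`, so we can pick the k that maximizes that quantity instead of using a fixed threshold. The function names and the approximation are mine, for illustration only:

```python
import numpy as np

def threshold_mask(probs, t=0.5):
    """Baseline: hard-threshold the probability map at t."""
    return probs >= t

def dice_greedy_mask(probs):
    """Toy Dice-aware rule (NOT RankSEG's exact algorithm):
    approximate the expected Dice of keeping the top-k pixels as
    2 * sum(top-k probs) / (k + sum(all probs)), then pick the best k."""
    flat = np.sort(probs.ravel())[::-1]        # probabilities, descending
    total = flat.sum()                          # expected foreground volume
    cumsum = np.cumsum(flat)                    # sum of top-k probs
    k = np.arange(1, flat.size + 1)
    expected_dice = 2 * cumsum / (k + total)    # independence approximation
    best_k = int(np.argmax(expected_dice)) + 1
    cutoff = flat[best_k - 1]                   # keep the best_k most confident pixels
    return probs >= cutoff

probs = np.array([[0.9, 0.6],
                  [0.4, 0.1]])
print(threshold_mask(probs))   # keeps only the pixels above 0.5
print(dice_greedy_mask(probs)) # also keeps the 0.4 pixel: it raises expected Dice
```

Note how the Dice-aware rule includes the 0.4 pixel that fixed thresholding discards: the optimal cutoff depends on the whole probability map, which is the core reason a fixed 0.5 threshold is not metric-optimal.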
When is this helpful?
- You need to maximize Dice/IoU for downstream evaluation
- You're working with medical imaging, where metric precision matters
- You're comparing against ground truth with specific metric requirements
- You're fine-tuning segmentation quality for production systems
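To make the target metrics concrete, here are the standard definitions of Dice and IoU for binary masks (helper names are mine):

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection over union: |A ∩ B| / |A ∪ B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

pred = np.array([[1, 1], [0, 0]], dtype=bool)
gt   = np.array([[1, 1], [1, 0]], dtype=bool)
print(dice(pred, gt))  # 2*2 / (2+3) = 0.8
print(iou(pred, gt))   # 2 / 3 ≈ 0.667
```

Both metrics reward overlap normalized by region size, which is exactly why a per-pixel 0.5 threshold (which ignores region size) can leave Dice/IoU on the table.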
More Information
This is based on our JMLR and NeurIPS papers on statistically consistent segmentation. The key insight: simple thresholding doesn't optimize for the metrics we care about (Dice, IoU), so we solve the direct optimization problem instead.
Links:
- GitHub: https://github.com/rankseg/rankseg
- Demo: https://huggingface.co/spaces/statmlben/rankseg
- Docs: https://rankseg.readthedocs.io
- Papers: JMLR | NeurIPS
Happy to discuss or answer questions!
