---
license: mit
pipeline_tag: image-to-image
library_name: diffusers
base_model:
- stabilityai/stable-diffusion-2
---
# SDMatte - SafeTensors Models for Interactive Matting

This repository contains SafeTensors versions of the SDMatte models for interactive image matting, packaged for use with ComfyUI.
## About SDMatte

**SDMatte: Grafting Diffusion Models for Interactive Matting**
SDMatte is a diffusion-driven interactive matting model that leverages the strong visual priors of pretrained diffusion models to extract fine-grained details, particularly in edge regions where traditional matting methods struggle.
## Key Features

- **Diffusion-powered:** Utilizes diffusion model priors for superior detail extraction
- **Interactive matting:** Visual prompt-driven interaction for precise control
- **Fine-grained details:** Excels at capturing complex edge regions and texture details
- **Coordinate & opacity awareness:** Enhanced spatial and opacity information processing
## Available Models

- `SDMatte.safetensors` - Standard interactive matting model
- `SDMatte_plus.safetensors` - Enhanced version with improved performance
## Credits and Attribution

### Original Work

- **Authors:** vivoCameraResearch Team
- **Original Repository:** https://huggingface.co/LongfeiHuang/SDMatte
- **Official Code:** https://github.com/vivoCameraResearch/SDMatte
- **Paper:** *SDMatte: Grafting Diffusion Models for Interactive Matting*
### Abstract

Recent interactive matting methods perform well at capturing the primary regions of objects but fall short in extracting fine-grained details in edge regions. Diffusion models, trained on billions of image-text pairs, demonstrate exceptional capability in modeling highly complex data distributions and synthesizing realistic texture details, while also exhibiting robust text-driven interaction capabilities, making them an attractive solution for interactive matting.