# SDMatte - SafeTensors Models for Interactive Matting

This repository provides **SafeTensors** versions of the SDMatte models for **interactive image matting**, optimized for seamless use with **ComfyUI**.

---

## About SDMatte

**SDMatte: Grafting Diffusion Models for Interactive Matting** is a state-of-the-art model that leverages the power of **diffusion priors** to achieve high-precision matting, especially around fine details and complex edges.

### Key Features

- **Diffusion-Powered**: Uses strong priors from diffusion models to extract high-fidelity details
- **Interactive Matting**: Visual prompt-driven control for intuitive editing
- **Edge & Texture Focus**: Excels in handling challenging edge regions and fine textures
- **Coordinate & Opacity Awareness**: Improves matting accuracy with spatial and opacity context

---

## Available Models

- `SDMatte.safetensors` - Standard version for interactive matting
- `SDMatte_plus.safetensors` - Enhanced version with improved performance
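Outside ComfyUI, these checkpoints can be opened with the `safetensors` library (e.g. `safetensors.torch.load_file("SDMatte.safetensors")`). The container format itself is simple: an 8-byte little-endian header length, a JSON header describing each tensor, then a flat byte buffer. The sketch below illustrates that layout with the standard library only; the helper functions and the `demo.safetensors` filename are illustrative, not part of this repository.

```python
import json
import struct

def write_safetensors(path, tensors):
    """Minimal writer: tensors maps name -> (dtype, shape, raw_bytes)."""
    header, blobs, offset = {}, [], 0
    for name, (dtype, shape, raw) in tensors.items():
        header[name] = {"dtype": dtype, "shape": shape,
                        "data_offsets": [offset, offset + len(raw)]}
        blobs.append(raw)
        offset += len(raw)
    header_bytes = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(header_bytes)))  # 8-byte LE header size
        f.write(header_bytes)                          # JSON metadata
        for raw in blobs:                              # flat tensor buffer
            f.write(raw)

def read_header(path):
    """Read only the JSON header: tensor names, dtypes, shapes, offsets."""
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(n).decode("utf-8"))

# Tiny demo tensor: four float32 values packed as raw little-endian bytes.
raw = struct.pack("<4f", 1.0, 2.0, 3.0, 4.0)
write_safetensors("demo.safetensors", {"alpha": ("F32", [2, 2], raw)})
print(read_header("demo.safetensors"))
```

Because the header is plain JSON, tensor names and shapes can be inspected without loading any weights, which is one reason the format is a safe drop-in for pickle-based checkpoints.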

---

## Built for ComfyUI: `ComfyUI-RMBG`

These models are designed for use with our **ComfyUI custom node**:
[ComfyUI-RMBG on GitHub](https://github.com/1038lab/ComfyUI-RMBG)

This custom node integrates SDMatte into ComfyUI workflows, enabling high-quality interactive matting inside a visual pipeline.

### Latest Update

**Version:** `v2.9.0`
**Date:** `2025-08-18`
[Read the update changelog](https://github.com/1038lab/ComfyUI-RMBG/blob/main/update.md#v290-20250818)

---

## Credits and Attribution

### Original Work

- **Authors**: vivoCameraResearch Team
- **Model Repository**: [Hugging Face: LongfeiHuang/SDMatte](https://huggingface.co/LongfeiHuang/SDMatte)
- **Official Code**: [GitHub: vivoCameraResearch/SDMatte](https://github.com/vivoCameraResearch/SDMatte)
- **Paper**: *SDMatte: Grafting Diffusion Models for Interactive Matting*

### Abstract (from the original paper)

> Recent interactive matting methods have shown satisfactory performance in capturing the primary regions of objects, but they fall short in extracting fine-grained details in edge regions. Diffusion models trained on billions of image-text pairs demonstrate exceptional capability in modeling highly complex data distributions and synthesizing realistic texture details, while exhibiting robust text-driven interaction capabilities, making them an attractive solution for interactive matting.

---