---
library_name: matanyone2
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---

Scaling Video Matting via a Learned Quality Evaluator

S-Lab, Nanyang Technological University · SenseTime Research, Singapore

MatAnyone 2 is a practical human video matting framework that preserves fine details by avoiding segmentation-like boundaries, while also showing enhanced robustness under challenging real-world conditions.

🎥 For more visual results, check out our project page.


## How to use

Install the package, then load the pretrained model and run inference:

```shell
pip install -qqU git+https://github.com/pq-yang/MatAnyone2.git
```

```python
from matanyone2 import MatAnyone2, InferenceCore

# Load the pretrained weights from the Hugging Face Hub
model = MatAnyone2.from_pretrained("PeiqingYang/MatAnyone2")
processor = InferenceCore(model, device="cuda:0")

# Inference: matte the video using a first-frame segmentation mask
processor.process_video(input_path="inputs/video/test-sample2.mp4",
                        mask_path="inputs/mask/test-sample2.png",
                        output_path="results")
```
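The single-video call above extends naturally to a folder of clips. The sketch below pairs each video with its first-frame mask by filename stem; this pairing convention, the directory layout, and the helper names are assumptions for illustration, not part of the MatAnyone2 API:

```python
# Batch-processing sketch built on the single-video API shown above.
# NOTE: the "mask shares the video's filename stem, with a .png extension"
# convention is an assumed layout for this example, not a MatAnyone2 requirement.
from pathlib import Path

def pair_video_with_mask(video_path, mask_dir):
    """Return the mask path matching a video by filename stem (assumed convention)."""
    return Path(mask_dir) / (Path(video_path).stem + ".png")

def batch_jobs(video_dir, mask_dir):
    """Collect (video, mask) pairs for every .mp4 that has a matching mask."""
    jobs = []
    for video in sorted(Path(video_dir).glob("*.mp4")):
        mask = pair_video_with_mask(video, mask_dir)
        if mask.exists():
            jobs.append((video, mask))
    return jobs

# Example driver (uses the `processor` created above):
# for video, mask in batch_jobs("inputs/video", "inputs/mask"):
#     processor.process_video(input_path=str(video),
#                             mask_path=str(mask),
#                             output_path="results")
```

Videos without a matching mask are simply skipped, so a partially annotated folder can still be processed in one pass.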