Update README.md

#1
by not-lain - opened
Files changed (1)
  1. README.md +67 -4
README.md CHANGED
@@ -5,7 +5,70 @@ tags:
  - pytorch_model_hub_mixin
  ---

- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- - Code: https://github.com/pq-yang/MatAnyone2
- - Paper: [More Information Needed]
- - Docs: [More Information Needed]
+ <div align="center">
+ <img src="https://github.com/pq-yang/MatAnyone2/blob/main/assets/matanyone2_logo.png?raw=true" alt="MatAnyone Logo" style="height: 52px;">
+ <div style="text-align: center">
+ <h2>Scaling Video Matting via a Learned Quality Evaluator</h2>
+ </div>
+
+ <div>
+ <a href='https://pq-yang.github.io/' target='_blank'>Peiqing Yang</a><sup>1</sup>&emsp;
+ <a href='https://shangchenzhou.com/' target='_blank'>Shangchen Zhou</a><sup>1†</sup>&emsp;
+ <a href="https://www.linkedin.com/in/kai-hao-794321382/" target='_blank'>Kai Hao</a><sup>1</sup>&emsp;
+ <a href="https://scholar.google.com.sg/citations?user=fMXnSGMAAAAJ&hl=en/" target='_blank'>Qingyi Tao</a><sup>2</sup>&emsp;
+ </div>
+ <div>
+ <sup>1</sup>S-Lab, Nanyang Technological University&emsp;
+ <sup>2</sup>SenseTime Research, Singapore&emsp;
+ <br>
+ <sup>†</sup>Project lead
+ </div>
+
+ <div>
+ <h4 align="center">
+ <a href="https://pq-yang.github.io/projects/MatAnyone2/" target='_blank'>
+ <img src="https://img.shields.io/badge/😈-Project%20Page-blue">
+ </a>
+ <a href="https://arxiv.org/abs/2512.11782" target='_blank'>
+ <img src="https://img.shields.io/badge/arXiv-2512.11782-b31b1b.svg">
+ </a>
+ <a href="https://www.youtube.com/watch?v=tyi8CNyjOhc&lc=Ugw1OS7z5QbW29RZCFZ4AaABAg" target='_blank'>
+ <img src="https://img.shields.io/badge/Demo%20Video-%23FF0000.svg?logo=YouTube&logoColor=white">
+ </a>
+ <a href="https://huggingface.co/spaces/PeiqingYang/MatAnyone" target='_blank'>
+ <img src="https://img.shields.io/badge/Demo-%F0%9F%A4%97%20Hugging%20Face-blue">
+ </a>
+ <a href="https://colab.research.google.com/drive/1NYW_CUDf7jnzxir7tOOlY7wRRalVOifD?usp=sharing" target='_blank'>
+ <img src="https://colab.research.google.com/assets/colab-badge.svg">
+ </a>
+ </h4>
+ </div>
+
+ <strong>MatAnyone 2 is a practical human video matting framework that preserves fine details by avoiding segmentation-like boundaries, while also showing enhanced robustness under challenging real-world conditions.</strong>
+
+ <div style="width: 100%; text-align: center; margin:auto;">
+ <img style="width:100%" src="https://github.com/pq-yang/MatAnyone2/blob/main/assets/teaser.jpg?raw=true">
+ </div>
+
+ 🎥 For more visual results, check out our <a href="https://pq-yang.github.io/projects/MatAnyone2/" target="_blank">project page</a>.
+
+ </div>
+
+ ---
+
+ ## How to use
+ Install the package directly from the GitHub repository:
+ ```shell
+ pip install -qqU git+https://github.com/pq-yang/MatAnyone2.git
+ ```
+
+ Then load the model and run inference:
+ ```python
+ from matanyone2 import MatAnyone2, InferenceCore
+
+ # load the pretrained weights from the Hub
+ model = MatAnyone2.from_pretrained("PeiqingYang/MatAnyone2")
+ processor = InferenceCore(model, device="cuda:0")
+
+ # run inference given an input video and its object mask
+ processor.process_video(
+     input_path="inputs/video/test-sample2.mp4",
+     mask_path="inputs/mask/test-sample2.png",
+     output_path="results",
+ )
+ ```
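
If you have several clips to process, a simple loop over paired video/mask files works. The sketch below is an assumption, not part of the repository: the `pair_inputs` helper and the `inputs/video` / `inputs/mask` layout (one same-named `.png` mask per `.mp4`) are hypothetical, and only the `process_video` call shown above is from the model card.

```python
from pathlib import Path

def pair_inputs(video_dir, mask_dir):
    """Match each .mp4 video with a same-named .png mask (assumed layout)."""
    pairs = []
    for video in sorted(Path(video_dir).glob("*.mp4")):
        mask = Path(mask_dir) / f"{video.stem}.png"
        if mask.exists():  # skip videos without a corresponding mask
            pairs.append((str(video), str(mask)))
    return pairs

# hypothetical batch loop, reusing the processor created above:
# for video_path, mask_path in pair_inputs("inputs/video", "inputs/mask"):
#     processor.process_video(input_path=video_path,
#                             mask_path=mask_path,
#                             output_path="results")
```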