Datasets: HigherHu/RORD-50
Modalities: Video · Size: < 1K · ArXiv: 2603.09283 · Libraries: Datasets · License: apache-2.0

Improve dataset card: add task category and paper link (#2)
Opened by nielsr (HF Staff)

Files changed (1): README.md (+18 −271)

README.md CHANGED
@@ -1,290 +1,37 @@
 ---
-base_model:
-- Wan-AI/Wan2.1-VACE-1.3B
 license: apache-2.0
-pipeline_tag: video-to-video
-library_name: diffusers
 ---
 
-<div align="center">
-  <h1>
-    SVOR (<b>S</b>table <b>V</b>ideo <b>O</b>bject <b>R</b>emoval)
-  </h1>
-  <p>
-    Official PyTorch code for <em>From Ideal to Real: Stable Video Object Removal under Imperfect Conditions</em>
-  </p>
-  <a href="https://arxiv.org/abs/2603.09283"><img src="https://img.shields.io/badge/arXiv-2603.09283-b31b1b" alt="arXiv"></a>
-  <a href="https://xiaomi-research.github.io/svor" target='_blank'>
-    <img src="https://img.shields.io/badge/🐳-Project%20Page-blue">
-  </a>
-  <a href='https://github.com/xiaomi-research/svor/'>
-    <img src='https://img.shields.io/badge/github-code-blue?logo=github'>
-  </a>
-  <a href='https://huggingface.co/datasets/HigherHu/RORD-50'>
-    <img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-RORD--50-orange'>
-  </a>
-  <a href="https://www.apache.org/licenses/LICENSE-2.0"><img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" alt="Apache 2.0"></a>
-
-  If SVOR is helpful to your projects, please help star this repo. Thanks! 🤗
-</div>
 
 ## Overview
-
-![overall_structure](asset/framework.png)
-
-Removing objects from videos remains difficult in the presence of real-world imperfections such as shadows, abrupt motion, and defective masks. Existing diffusion-based video inpainting models often struggle to maintain temporal stability and visual consistency under these challenges. We propose **Stable Video Object Removal (SVOR)**, a robust framework that achieves shadow-free, flicker-free, and mask-defect-tolerant removal through three key designs: (1) **Mask Union for Stable Erasure (MUSE)**, a windowed union strategy applied during temporal mask downsampling to preserve all target regions observed within each window, effectively handling abrupt motion and reducing missed removals; (2) **Denoising-Aware Segmentation (DA-Seg)**, a lightweight segmentation head on a decoupled side branch equipped with Denoising-Aware AdaLN and trained with mask degradation to provide an internal diffusion-aware localization prior without affecting content generation; and (3) **Curriculum Two-Stage Training**, where Stage I performs self-supervised pretraining on unpaired real-background videos with online random masks to learn realistic background and temporal priors, and Stage II refines on synthetic pairs using mask degradation and side-effect-weighted losses, jointly removing objects and their associated shadows/reflections while improving cross-domain robustness. Extensive experiments show that SVOR attains new state-of-the-art results across multiple datasets and degraded-mask benchmarks, advancing video object removal from ideal settings toward real-world applications.
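The windowed union in MUSE can be sketched as a logical OR over each temporal downsampling window, so a target visible in any frame of a window survives downsampling. This is an illustrative reconstruction, not the repository's implementation; the function name, array shapes, and the `stride` parameter are assumptions:

```python
import numpy as np

def muse_temporal_downsample(masks: np.ndarray, stride: int = 4) -> np.ndarray:
    """Downsample a binary mask sequence along time by taking the union
    (logical OR) over each window of `stride` frames, so that a target
    observed in *any* frame of the window is preserved.

    masks: (T, H, W) binary array; T is padded to a multiple of `stride`
    by repeating the last frame.
    Returns: (ceil(T / stride), H, W) binary array.
    """
    t, h, w = masks.shape
    pad = (-t) % stride
    if pad:
        masks = np.concatenate([masks, np.repeat(masks[-1:], pad, axis=0)])
    windows = masks.reshape(-1, stride, h, w)
    return windows.any(axis=1).astype(masks.dtype)
```

Under this sketch, a fast-moving object that occupies disjoint pixels in consecutive frames still has all of its positions covered in the downsampled mask, which is the behavior the abstract attributes to MUSE for abrupt motion.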
-
-## Results
-
-For more visual results, check out our <a href="https://xiaomi-research.github.io/svor/" target="_blank">project page</a>.
-
-<h3>Common Masks</h3>
-<table>
-  <thead>
-    <tr>
-      <th>Masked Input</th>
-      <th>Result</th>
-    </tr>
-  </thead>
-  <tbody>
-    <tr>
-      <td><img src="asset/examples/input/bmx-bumps.gif" width="100%"></td>
-      <td><img src="asset/examples/result/bmx-bumps.gif" width="100%"></td>
-    </tr>
-    <tr>
-      <td><img src="asset/examples/input/boat.gif" width="100%"></td>
-      <td><img src="asset/examples/result/boat.gif" width="100%"></td>
-    </tr>
-    <tr>
-      <td><img src="asset/examples/input/bus.gif" width="100%"></td>
-      <td><img src="asset/examples/result/bus.gif" width="100%"></td>
-    </tr>
-    <tr>
-      <td><img src="asset/examples/input/varanus-cage.gif" width="100%"></td>
-      <td><img src="asset/examples/result/varanus-cage.gif" width="100%"></td>
-    </tr>
-  </tbody>
-</table>
-
-<h3>Defective Masks</h3>
-<table>
-  <thead>
-    <tr>
-      <th>Masked Input</th>
-      <th>Result</th>
-    </tr>
-  </thead>
-  <tbody>
-    <tr>
-      <td><img src="asset/examples/input_maskdrop0.5/camel.gif" width="100%"></td>
-      <td><img src="asset/examples/result_maskdrop0.5/camel.gif" width="100%"></td>
-    </tr>
-    <tr>
-      <td><img src="asset/examples/input_maskdrop0.5/dog-gooses.gif" width="100%"></td>
-      <td><img src="asset/examples/result_maskdrop0.5/dog-gooses.gif" width="100%"></td>
-    </tr>
-    <tr>
-      <td><img src="asset/examples/input_maskdrop0.5/elephant.gif" width="100%"></td>
-      <td><img src="asset/examples/result_maskdrop0.5/elephant.gif" width="100%"></td>
-    </tr>
-    <tr>
-      <td><img src="asset/examples/input_maskdrop0.5/kite-walk.gif" width="100%"></td>
-      <td><img src="asset/examples/result_maskdrop0.5/kite-walk.gif" width="100%"></td>
-    </tr>
-  </tbody>
-</table>
-
-## Dependencies and Installation
-
-The code is tested with Python 3.10.
-
-1. Clone the repo
-
-   ```bash
-   git clone https://github.com/xiaomi-research/SVOR.git
-   ```
-
-2. Create a conda environment and install dependencies
-
-   ```bash
-   # create a new conda env
-   conda create -n svor python=3.10 -y
-   conda activate svor
-
-   # install pytorch and xformers
-   pip install torch==2.7.0 torchvision==0.22.0 torchaudio==2.7.0 xformers==0.0.30
-
-   # install other python dependencies
-   pip install -r requirements.txt
-   ```
-
-3. [Optional] Install flash-attn; refer to [flash-attention](https://github.com/Dao-AILab/flash-attention)
-
-   ```bash
-   pip install packaging ninja psutil
-   pip install flash-attn==2.7.4.post1 --no-build-isolation
-   ```
-
-### [Optional] Run with Docker
-
-```bash
-# Docker image names must be lowercase
-docker build -f Dockerfile.ds -t svor:latest .
-docker run --gpus all -it --rm -v /path/to/videos:/data -v /path/to/models:/root/models svor:latest
-```
-
-## Pretrained Weights
-
-Download the pretrained weights and put them in `models/`:
-
-- download [Wan-AI/Wan2.1-VACE-1.3B](https://huggingface.co/Wan-AI/Wan2.1-VACE-1.3B)
-- download our two trained LoRAs from [HigherHu/SVOR](https://huggingface.co/HigherHu/SVOR)
-
-The files in `models/` are as follows:
-
-```
-models/
-├── put models here.txt
-├── remove_model_stage1.safetensors
-├── remove_model_stage2.safetensors
-└── Wan2.1-VACE-1.3B/
-```
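The layout above can be sanity-checked before inference with a small stdlib helper, so a missing download fails early with a clear message. This is a sketch, not part of the SVOR codebase; the function name is an assumption, and the entry names are taken from the tree above:

```python
from pathlib import Path

# Entries listed in the expected `models/` tree above.
EXPECTED = [
    "remove_model_stage1.safetensors",
    "remove_model_stage2.safetensors",
    "Wan2.1-VACE-1.3B",
]

def missing_weights(models_dir: str = "models") -> list:
    """Return the entries from the expected `models/` layout that are absent."""
    root = Path(models_dir)
    return [name for name in EXPECTED if not (root / name).exists()]
```

For example, `missing = missing_weights()` followed by `raise SystemExit(f"missing weights: {missing}")` when the list is non-empty gives a clearer error than a mid-inference file-not-found.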
-
-## Quick Test
-
-Run the following script; results will be saved to `samples/SVOR/`:
-
-```bash
-python predict_SVOR.py \
-  --input_video samples/input/bmx-bumps_raw.mp4 \
-  --input_mask_video samples/input/bmx-bumps_mask.mp4
-```
-
-```
-Usage:
-
-  python predict_SVOR.py [options]
-
-Some key options:
-  --input_video          Path to input video
-  --input_mask_video     Path to mask video
-  --num_inference_steps  Inference steps (default: 20)
-  --save_dir             Output directory
-  --sample_size          Frame size: height width (default: 720 1280)
-```
-
-ATTENTION: Inference requires at least **40GB** of GPU memory.
-
-## Interactive Demo
-
-1. Install [SAM2](https://github.com/facebookresearch/sam2) and download the pretrained weights [sam2.1_hiera_large.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt) to `models/`
-
-2. Start the gradio demo
-
-   ```bash
-   python -m demo.gradio_app
-   ```
-
-   Ensure it prints the following information:
-
-   ```
-   ...
-   [Info] SAM2 Predictor initialized successfully
-   ...
-   [Info] Removal model Predictor initialized successfully
-   Running on local URL: http://0.0.0.0:7861
-   ```
-
-3. Open the web page: http://[ServerIP]:7861
-
-   ```
-   Usage
-   1. Upload a video and click the "Process video" button on the "1. Upload and Preprocess" tab
-   2. Switch to the "2. Annotate and Propagate" tab and click to segment the objects
-   3. Use "Add annotation" and "Propagate masks" to finish the segmentation
-   4. Check the object ID in "Display object list", then switch to the "3. Remove Objects" tab
-   5. Click "Preview video" to preview the input video and mask video
-   6. Click "Start removal" to run the SVOR algorithm
-   ```
-
-## RORD-50 Dataset
-
-The RORD-50 dataset can be downloaded from [TBD](TBD)
-
-## Acknowledgement
-
-Our work benefits from the following open-source projects:
-
-- [VideoX-Fun](https://github.com/aigc-apps/VideoX-Fun)
-- [VACE](https://github.com/ali-vilab/VACE)
-- [ROSE](https://github.com/Kunbyte-AI/ROSE)
-- [SAM2 - Segment Anything Model 2](https://github.com/facebookresearch/sam2)
-- [RORD](https://github.com/Forty-lock/RORD)
 
 ## Citation
-
-If you find our repo useful for your research, please consider citing our paper:
 
 ```bibtex
 @article{hu2026svor,
 title={From Ideal to Real: Stable Video Object Removal under Imperfect Conditions},
-author={Hu, Jiagao and Chen, Yuxuan and Li, Fuhao and Wang, Zepeng and Wang, Fei and Zhou, Daiguo and Luan, Jian},
 journal={arXiv preprint arXiv:2603.09283},
 year={2026}
 }
 ```
 ---
 license: apache-2.0
+task_categories:
+- image-to-image
 ---
 
+# RORD-50 Dataset
 
+This repository contains the **RORD-50** dataset, introduced in the paper [From Ideal to Real: Stable Video Object Removal under Imperfect Conditions](https://huggingface.co/papers/2603.09283).
 
+[**Project Page**](https://xiaomi-research.github.io/svor/) | [**GitHub**](https://github.com/xiaomi-research/svor) | [**Paper**](https://huggingface.co/papers/2603.09283)
 
+## Introduction
+The RORD-50 dataset is a benchmark designed to evaluate video object removal performance under real-world challenges, such as shadows, abrupt motion, and defective masks. It was introduced as part of the **Stable Video Object Removal (SVOR)** framework, which focuses on achieving shadow-free, flicker-free, and mask-defect-tolerant removal.
 
 ## Overview
+Removing objects from videos remains difficult in the presence of real-world imperfections. SVOR advances video object removal from ideal settings toward real-world applications by handling abrupt motion and mask defects effectively. This dataset provides the necessary benchmarks for testing the robustness and temporal stability of video inpainting models.
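The defective-mask condition this benchmark targets (the SVOR repository's examples use a `maskdrop0.5` setting) can be emulated by randomly blanking mask frames. A minimal stdlib sketch under assumed conventions: the function name is hypothetical, masks are lists of binary rows per frame, and frame dropping is one plausible reading of the degradation:

```python
import random

def drop_mask_frames(masks, drop_rate=0.5, seed=0):
    """Simulate defective masks by blanking a random subset of frames.

    masks: list of per-frame binary masks (each frame is a list of rows).
    Returns a new list where roughly `drop_rate` of the frames are zeroed,
    mimicking a tracker that loses the target in some frames.
    """
    rng = random.Random(seed)  # seeded for reproducible benchmarks
    blank = [[0] * len(row) for row in masks[0]] if masks else []
    return [blank if rng.random() < drop_rate else m for m in masks]
```

A fixed seed keeps the degraded benchmark reproducible across evaluation runs, which matters when comparing methods on the same corrupted inputs.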
 
 ## Citation
+If you find this dataset or the SVOR framework useful for your research, please consider citing the paper:
 
 ```bibtex
 @article{hu2026svor,
 title={From Ideal to Real: Stable Video Object Removal under Imperfect Conditions},
+author={Hu, Jiagao and Chen, Yuxuan and Li, Fuhao and Wang, Zepeng and Wang, Fei and Zhou, Daiguo and Luan, Jian},
 journal={arXiv preprint arXiv:2603.09283},
 year={2026}
 }
 ```
+
+## Acknowledgement
+This work benefits from the following open-source projects:
+- [VideoX-Fun](https://github.com/aigc-apps/VideoX-Fun)
+- [VACE](https://github.com/ali-vilab/VACE)
+- [ROSE](https://github.com/Kunbyte-AI/ROSE)
+- [SAM2 - Segment Anything Model 2](https://github.com/facebookresearch/sam2)
+- [RORD](https://github.com/Forty-lock/RORD)