---
library_name: diffusers
license: cc-by-nc-4.0
pipeline_tag: image-to-image
tags:
- video-to-video
- video-object-removal
- video-inpainting
- cvpr
---
| |
<div align="center">
<h1>EffectErase: Joint Video Object Removal<br />and Insertion for High-Quality Effect Erasing</h1>
<p><strong>CVPR 2026</strong></p>
<p>
<a href="https://www.yangfu.site/" target="_blank" rel="noreferrer">Yang Fu</a>
·
<a href="https://henghuiding.com/group/" target="_blank" rel="noreferrer">Yike Zheng</a>
·
<a href="https://github.com/oliviadzy" target="_blank" rel="noreferrer">Ziyun Dai</a>
·
<a href="https://henghuiding.com/" target="_blank" rel="noreferrer">Henghui Ding</a><span>†</span>
</p>
<p>
Institute of Big Data, College of Computer Science and Artificial Intelligence, Fudan University, China
<br />
<span>† Corresponding author</span>
</p>
<p>
<a href="https://henghuiding.com/EffectErase/" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/🐳-Project%20Page-blue" alt="Project Page" /></a>
<a href="https://huggingface.co/papers/2603.19224" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/Paper-CVPR%202026-green" alt="Paper" /></a>
<a href="https://github.com/FudanCVL/EffectErase" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/GitHub-FudanCVL%2FEffectErase-181717?logo=github" alt="GitHub" /></a>
<a href="https://huggingface.co/papers/2603.19224" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/arXiv-2603.19224-red" alt="arXiv" /></a>
<a href="https://huggingface.co/datasets/FudanCVL/EffectErase" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Dataset-Hugging%20Face-yellow" alt="Dataset" /></a>
</p>
</div>
| |
This repository provides the checkpoint `EffectErase.ckpt` for **EffectErase**, as presented in the paper [EffectErase: Joint Video Object Removal and Insertion for High-Quality Effect Erasing](https://huggingface.co/papers/2603.19224).

<img src="assets/teaser.gif" alt="teaser" />
|
|
## Abstract

Video object removal aims to eliminate dynamic target objects together with their visual effects, such as deformation, shadows, and reflections, while restoring a seamless background. Current methods often struggle to erase these effects and synthesize coherent backgrounds. To address this, we introduce **VOR** (**V**ideo **O**bject **R**emoval), a large-scale dataset of 60K high-quality video pairs covering a wide range of object effects. Building on VOR, we propose ***EffectErase***, an effect-aware video object removal method that treats video object insertion as a reciprocal learning task. The model incorporates task-aware region guidance and an insertion-removal consistency objective to achieve high-quality erasing of object effects across diverse scenarios.
|
|
## Quick Start

1. Set up the repository and environment:

```bash
git clone https://github.com/FudanCVL/EffectErase.git
cd EffectErase
pip install -e .
```
|
|
2. Download the weights:

```bash
hf download alibaba-pai/Wan2.1-Fun-1.3B-InP --local-dir Wan-AI/Wan2.1-Fun-1.3B-InP
hf download FudanCVL/EffectErase EffectErase.ckpt --local-dir ./
```
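After the downloads finish, a quick sanity check can confirm both artifacts landed where later scripts expect them. This is a minimal sketch; the two paths simply mirror the `--local-dir` arguments of the download commands above:

```shell
# Sanity check: both paths mirror the download commands above.
# Prints OK or MISSING for each expected artifact.
for f in "EffectErase.ckpt" "Wan-AI/Wan2.1-Fun-1.3B-InP"; do
  if [ -e "$f" ]; then
    echo "OK       $f"
  else
    echo "MISSING  $f"
  fi
done
```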
|
|
3. Run the script:

```bash
bash script/test_remove.sh
```
|
|
To run on your own data, edit `script/test_remove.sh` and change these paths:
- `--fg_bg_path`: Path to the input video.
- `--mask_path`: Path to the mask video (e.g., generated by SAM 2.1).
- `--output_path`: Path where the results are saved.
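For orientation, the edited portion of `script/test_remove.sh` might look like the excerpt below. Only the three flag names are taken from the list above; the Python entry-point name and the placeholder paths are assumptions for illustration, so defer to the actual contents of the script in the repository:

```shell
# Hypothetical excerpt of script/test_remove.sh after editing.
# "infer.py" is an illustrative entry-point name, not taken from the
# repository; replace the placeholder paths with your own files.
python infer.py \
    --fg_bg_path  /path/to/input_video.mp4 \
    --mask_path   /path/to/mask_video.mp4 \
    --output_path /path/to/results
```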
|
|
## BibTeX

```bibtex
@inproceedings{fu2026effecterase,
  title={EffectErase: Joint Video Object Removal and Insertion for High-Quality Effect Erasing},
  author={Fu, Yang and Zheng, Yike and Dai, Ziyun and Ding, Henghui},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2026}
}
```
|
|
## Contact

If you have any questions, please feel free to reach out at aleeyanger@gmail.com.