---
base_model:
  - alibaba-pai/Wan2.1-Fun-1.3B-Control
  - alibaba-pai/Wan2.1-Fun-14B-Control
language:
  - en
  - zh
license: apache-2.0
pipeline_tag: image-to-video
---

# 💡 Lumen: Consistent Video Relighting and Harmonious Background Replacement with Video Generative Models


💡 Lumen is an end-to-end video relighting framework built on large-scale video generative models. It can relight the foreground of a video and replace its background, guided by flexible textual descriptions of the desired lighting and background.

## Introduction

Video relighting aims to replace the background in videos while correspondingly adjusting the lighting in the foreground with harmonious blending. Lumen preserves the original properties of the foreground (e.g., albedo) and propagates consistent relighting across temporal frames. It is trained on a large-scale dataset featuring a mixture of realistic and synthetic videos, utilizing a domain-aware adapter to decouple the learning of relighting and domain appearance distribution.
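The domain-aware adapter is only described at a high level here, so the snippet below is an illustrative sketch, not the authors' implementation: one plausible design conditions a residual adapter on a learned per-domain embedding (real vs. synthetic) during training, so the shared pathway learns relighting while domain-specific appearance is absorbed by the embedding, and inference always uses the "real" embedding.

```python
import torch
import torch.nn as nn

class DomainAwareAdapter(nn.Module):
    """Illustrative sketch only: a residual adapter whose hidden state is
    modulated by a learned domain embedding (0 = real, 1 = synthetic)."""

    def __init__(self, dim: int, num_domains: int = 2, hidden: int = 256):
        super().__init__()
        self.domain_emb = nn.Embedding(num_domains, hidden)
        self.down = nn.Linear(dim, hidden)
        self.up = nn.Linear(hidden, dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor, domain_id: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); domain_id: (batch,) integer domain labels
        h = self.act(self.down(x) + self.domain_emb(domain_id)[:, None, :])
        return x + self.up(h)  # residual update keeps the base features intact

adapter = DomainAwareAdapter(dim=1024)
x = torch.randn(2, 16, 1024)
real = torch.zeros(2, dtype=torch.long)  # at inference, always the "real" domain
print(adapter(x, real).shape)  # torch.Size([2, 16, 1024])
```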

## Authors

Jianshu Zeng, Yuxuan Liu, Yutong Feng, Chenxuan Miao, Zixiang Gao, Jiwang Qu, Jianzhang Zhang, Bin Wang, Kun Yuan.

## 🚀 Quick Start

This repository contains the weights of Lumen. For detailed instructions on how to use the model, please refer to the official GitHub repository.

### Environment Setup

```bash
conda create -n lumen python=3.10 -y
conda activate lumen
pip install torch==2.4.0 torchvision==0.19.0 --index-url https://download.pytorch.org/whl/cu124
pip install -r requirements.txt
```
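An optional sanity check (not part of the original instructions) to confirm that the CUDA build of PyTorch from the step above was picked up before running inference:

```python
# sanity_check.py -- optional; verifies the environment installed above
import torch

print("torch:", torch.__version__)           # expect 2.4.0
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```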

### Inference

Inference requires the Lumen weights and the Wan2.1-Fun base models; a download sketch is shown below, followed by the inference commands.
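The snippet below is one illustrative way to fetch the weights with `huggingface_hub`. The base-model repo ID is taken from this card's metadata; the Lumen repo ID and local paths are placeholders, so adjust them to the layout expected by the GitHub repository.

```python
# download_weights.py -- illustrative sketch; the Lumen repo ID is a placeholder
from huggingface_hub import snapshot_download

# Base model listed in this card's metadata (1.3B variant; swap in the 14B one if needed)
snapshot_download(
    repo_id="alibaba-pai/Wan2.1-Fun-1.3B-Control",
    local_dir="models/Wan2.1-Fun-1.3B-Control",
)

# PLACEHOLDER repo ID -- replace with the actual Lumen weights repository
snapshot_download(
    repo_id="<lumen-weights-repo>",
    local_dir="models/Lumen",
)
```

With the weights in place, run inference or launch the Gradio demo: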

```bash
# Run text-to-video inference
python infer_t2v.py

# Launch the Gradio demo
python app_lumen.py
```

## 📋 Citation

If you find our work helpful, please consider citing:

```bibtex
@article{zeng2025lumen,
    title={Lumen: Consistent Video Relighting and Harmonious Background Replacement with Video Generative Models},
    author={Zeng, Jianshu and Liu, Yuxuan and Feng, Yutong and Miao, Chenxuan and Gao, Zixiang and Qu, Jiwang and Zhang, Jianzhang and Wang, Bin and Yuan, Kun},
    journal={arXiv preprint arXiv:2508.12945},
    year={2025},
    url={https://arxiv.org/abs/2508.12945},
}
```

## Acknowledgements

We would like to thank the contributors of DiffSynth-Studio and VideoX-Fun, as well as the Wan2.1 team, for their open research and exploration.