Use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained("tqliu/Light-X", dtype=torch.bfloat16, device_map="cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
image.save("light_x_sample.png")
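The comment in the snippet above suggests switching to "mps" on Apple devices. A small helper can select the best available PyTorch backend automatically; this is a sketch for convenience, not part of the Light-X release, and the `pick_device` name is hypothetical:

```python
import torch

def pick_device() -> str:
    """Return the best available torch device string: cuda > mps > cpu."""
    if torch.cuda.is_available():
        return "cuda"
    # Apple-silicon Metal backend, if this torch build supports it
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"
```

The returned string can then be passed as the `device_map` argument when loading the pipeline, e.g. `DiffusionPipeline.from_pretrained("tqliu/Light-X", dtype=torch.bfloat16, device_map=pick_device())`.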

Light-X

📄 Paper  |  🚀 Project Page  |  💻 GitHub

Introduction

This repository provides the pretrained Light-X weights, which support text-conditioned and background-image-conditioned video relighting as well as controllable view synthesis.

Citation

If you find our work useful for your research, please consider citing our paper:

@article{liu2025light,
  title={Light-X: Generative 4D Video Rendering with Camera and Illumination Control},
  author={Liu, Tianqi and Chen, Zhaoxi and Huang, Zihao and Xu, Shaocong and Zhang, Saining and Ye, Chongjie and Li, Bohan and Cao, Zhiguo and Li, Wei and Zhao, Hao and others},
  journal={arXiv preprint arXiv:2512.05115},
  year={2025}
}