---
pipeline_tag: image-to-video
library_name: diffusers
---

# Generating the Past, Present and Future from a Motion-Blurred Image

This repository contains the model weights for the paper *Generating the Past, Present and Future from a Motion-Blurred Image*.

Project Page | GitHub Repository | Gradio Demo

## Summary

What can a motion-blurred image reveal about a scene's past, present, and future? This work repurposes a pre-trained video diffusion model to recover videos revealing complex scene dynamics during the moment of capture and predicting what might have occurred immediately in the past or future. The approach is robust, generalizes to in-the-wild images, and supports downstream tasks such as recovering camera trajectories and object motion.

## Sample Usage

To run inference on your own images, first follow the setup instructions in the official GitHub repository, then run:

```bash
python inference.py --image_path assets/dummy_image.png --output_path output/
```
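To process a whole folder of images, the single-image command above can be wrapped in a small batch driver. This is a minimal sketch, not part of the official codebase: it assumes `inference.py` accepts exactly the `--image_path` and `--output_path` flags shown above, and writes each result to its own subdirectory of the output folder.

```python
# Hedged sketch: batch the single-image CLI over a folder of images.
# Assumes inference.py takes --image_path and --output_path as in the README.
import subprocess
from pathlib import Path

def build_command(image_path: Path, output_dir: Path) -> list[str]:
    """Construct the inference command for one image."""
    return [
        "python", "inference.py",
        "--image_path", str(image_path),
        # One output subdirectory per input image, named after the file stem.
        "--output_path", str(output_dir / image_path.stem),
    ]

def run_batch(image_dir: str, output_dir: str = "output") -> None:
    """Run inference on every PNG/JPEG in image_dir."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    images = sorted(
        p for p in Path(image_dir).iterdir()
        if p.suffix.lower() in {".png", ".jpg", ".jpeg"}
    )
    for img in images:
        subprocess.run(build_command(img, out), check=True)
```

Separating command construction from execution keeps the flag handling easy to adapt if the script's interface changes.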

## Citation

If you use this model or code in your research, please cite:

```bibtex
@article{Tedla2025Blur2Vid,
  title        = {Generating the Past, Present, and Future from a Motion-Blurred Image},
  author       = {Tedla, SaiKiran and Zhu, Kelly and Canham, Trevor and Taubner, Felix and Brown, Michael and Kutulakos, Kiriakos and Lindell, David},
  journal      = {ACM Transactions on Graphics},
  year         = {2025},
  note         = {SIGGRAPH Asia}
}
```

## Contact

For questions or issues, please reach out through the project page or contact Sai Tedla.