---
pipeline_tag: image-to-video
library_name: diffusers
---
# Generating the Past, Present and Future from a Motion-Blurred Image
This repository contains the model weights for the paper [Generating the Past, Present and Future from a Motion-Blurred Image](https://huggingface.co/papers/2512.19817).
[**Project Page**](https://blur2vid.github.io) | [**GitHub Repository**](https://github.com/tedlasai/blur2vid) | [**Gradio Demo**](https://huggingface.co/spaces/tedlasai/blur2vid)
## Summary
What can a motion-blurred image reveal about a scene's past, present, and future? This work repurposes a pre-trained video diffusion model to recover videos revealing complex scene dynamics during the moment of capture and predicting what might have occurred immediately in the past or future. The approach is robust, generalizes to in-the-wild images, and supports downstream tasks such as recovering camera trajectories and object motion.
## Sample Usage
To run inference on your own images, please follow the setup instructions in the [official GitHub repository](https://github.com/tedlasai/blur2vid). You can run the model using the following command:
```bash
python inference.py --image_path assets/dummy_image.png --output_path output/
```
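If you prefer to fetch the model weights programmatically, the snippet below is a minimal sketch using `huggingface_hub`. The repository id `tedlasai/blur2vid` is an assumption for illustration; check the Hub for the actual repository name before use.

```python
# Hedged sketch: download the model weights from the Hugging Face Hub.
# NOTE: the repo id below is an assumption, not confirmed by this model card.
from huggingface_hub import snapshot_download


def download_weights(repo_id: str = "tedlasai/blur2vid") -> str:
    """Download every file in the model repository and return the local cache path."""
    return snapshot_download(repo_id=repo_id)
```

The returned directory can then be passed to the inference script from the GitHub repository as its weights location.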
## Citation
If you use this model or code in your research, please cite:
```bibtex
@article{Tedla2025Blur2Vid,
  title   = {Generating the Past, Present, and Future from a Motion-Blurred Image},
  author  = {Tedla, SaiKiran and Zhu, Kelly and Canham, Trevor and Taubner, Felix and Brown, Michael and Kutulakos, Kiriakos and Lindell, David},
  journal = {ACM Transactions on Graphics},
  year    = {2025},
  note    = {SIGGRAPH Asia}
}
```
## Contact
For questions or issues, please reach out through the [project page](https://blur2vid.github.io) or contact [Sai Tedla](mailto:tedlasai@gmail.com).