Learning Temporally Consistent Video Depth from Video Diffusion Priors
Paper • 2406.01493 • Published • 23
import torch
from diffusers import DiffusionPipeline

# Switch "cuda" to "mps" on Apple-silicon devices.
pipe = DiffusionPipeline.from_pretrained(
    "jhshao/ChronoDepth", torch_dtype=torch.bfloat16, device_map="cuda"
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]

This model is the official checkpoint of the paper "Learning Temporally Consistent Video Depth from Video Diffusion Priors".
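ChronoDepth is a video depth estimator, so its outputs are per-frame depth maps rather than RGB images. A common post-processing step is normalizing each predicted depth map to 8-bit grayscale for visualization. A minimal sketch, assuming the prediction arrives as a float array; the `depth_to_uint8` helper is hypothetical and not part of the released pipeline:

```python
import numpy as np

def depth_to_uint8(depth):
    """Normalize a depth map to [0, 255] for grayscale visualization.

    Hypothetical helper, not part of the ChronoDepth release.
    """
    d = np.asarray(depth, dtype=np.float32)
    # Min-max normalize; the epsilon guards against a constant map.
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)
    return (d * 255.0).astype(np.uint8)

# Example: a toy 2x2 "depth map"
vis = depth_to_uint8([[0.0, 1.0], [2.0, 3.0]])
```

The same normalization can be applied frame by frame to turn a predicted depth sequence into a viewable grayscale video.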
Jiahao Shao*, Yuanbo Yang*, Hongyu Zhou, Youmin Zhang, Yujun Shen, Matteo Poggi, Yiyi Liao†
Please cite our paper if you find this repository useful:
@misc{shao2024learning,
title={Learning Temporally Consistent Video Depth from Video Diffusion Priors},
author={Jiahao Shao and Yuanbo Yang and Hongyu Zhou and Youmin Zhang and Yujun Shen and Matteo Poggi and Yiyi Liao},
year={2024},
eprint={2406.01493},
archivePrefix={arXiv},
primaryClass={cs.CV}
}