Vid2HDRImg - pretrained weights

How to use chinmay0301/Vid2HDRImg with Diffusers:
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image
# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained("chinmay0301/Vid2HDRImg", dtype=torch.bfloat16, device_map="cuda")
prompt = "Turn this cat into a dog"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
image = pipe(image=input_image, prompt=prompt).images[0]
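The call returns standard PIL images, so the edited result can be written straight to disk (the filename here is only an example):

image.save("output.png")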
Pretrained weights for the Vid2HDRImg single-shot HDR reconstruction model:
- unet/: fine-tuned Stable Video Diffusion UNet (Stage 1)
- fusion_net.pt: pixel-space fusion U-Net (Stage 2)
See the GitHub repo for installation and inference instructions.
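To load the two stages individually rather than through DiffusionPipeline, something like the sketch below should work. This is a minimal sketch, assuming that unet/ follows the standard diffusers model layout (readable by UNetSpatioTemporalConditionModel, the Stable Video Diffusion UNet class) and that fusion_net.pt is a plain PyTorch checkpoint; FusionUNet is a hypothetical placeholder for the fusion network class defined in the GitHub repo.

import torch
from diffusers import UNetSpatioTemporalConditionModel
from huggingface_hub import hf_hub_download

# Stage 1: fine-tuned SVD UNet, assumed to be stored in the unet/
# subfolder in the standard diffusers layout (config.json + weights)
unet = UNetSpatioTemporalConditionModel.from_pretrained(
    "chinmay0301/Vid2HDRImg", subfolder="unet", torch_dtype=torch.float16
)

# Stage 2: pixel-space fusion U-Net, assumed to be a plain checkpoint
ckpt_path = hf_hub_download("chinmay0301/Vid2HDRImg", "fusion_net.pt")
state_dict = torch.load(ckpt_path, map_location="cpu")

# FusionUNet is a hypothetical placeholder; substitute the actual
# fusion network class from the GitHub repo before loading:
# fusion_net = FusionUNet()
# fusion_net.load_state_dict(state_dict)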
Base model: stabilityai/stable-video-diffusion-img2vid