How to use with the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# Switch device_map to "mps" on Apple silicon devices
pipe = DiffusionPipeline.from_pretrained("chinmay0301/Vid2HDRImg", torch_dtype=torch.bfloat16, device_map="cuda")

prompt = "Turn this cat into a dog"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

image = pipe(image=input_image, prompt=prompt).images[0]
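
The pipeline returns standard PIL images, so the result can be written straight to disk; the output filename below is only illustrative.

image.save("vid2hdrimg_output.png")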

Vid2HDRImg: pretrained weights

Pretrained weights for the Vid2HDRImg single-shot HDR reconstruction model:

  • unet/ - fine-tuned Stable Video Diffusion UNet (Stage 1)
  • fusion_net.pt - pixel-space fusion U-Net (Stage 2)

See the GitHub repo for installation and inference instructions.
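
For orientation before reading the repo, here is a minimal loading sketch. It assumes the Stage 1 weights use the standard Stable Video Diffusion UNet class from diffusers, and that the fusion U-Net class is defined in the GitHub code, so only its state dict is fetched here; dtypes, devices, and the FusionUNet name are illustrative assumptions.

import torch
from diffusers import UNetSpatioTemporalConditionModel
from huggingface_hub import hf_hub_download

# Stage 1: fine-tuned Stable Video Diffusion UNet stored under unet/
unet = UNetSpatioTemporalConditionModel.from_pretrained(
    "chinmay0301/Vid2HDRImg", subfolder="unet", torch_dtype=torch.float16
)

# Stage 2: pixel-space fusion U-Net weights (architecture defined in the GitHub repo)
fusion_path = hf_hub_download("chinmay0301/Vid2HDRImg", filename="fusion_net.pt")
fusion_state = torch.load(fusion_path, map_location="cpu")
# fusion_net = FusionUNet(...)               # hypothetical class name from the repo
# fusion_net.load_state_dict(fusion_state)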
