Improving Diffusion Models for Virtual Try-on
Paper: arXiv:2403.05139
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image
# load the pipeline in bfloat16; switch "cuda" to "mps" on Apple devices
pipe = DiffusionPipeline.from_pretrained("yisol/IDM-VTON", torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")
prompt = "Turn this cat into a dog"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
image = pipe(image=input_image, prompt=prompt).images[0]

This is the official implementation of the paper 'Improving Diffusion Models for Authentic Virtual Try-on in the Wild'.
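The snippet above hard-codes a CUDA device. A minimal sketch for picking a device at runtime and saving the result (assuming the pipeline returns standard PIL images as other diffusers pipelines do; the output filename is only illustrative):

import torch

# prefer CUDA, then Apple "mps", then fall back to CPU
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"
pipe = pipe.to(device)

image = pipe(image=input_image, prompt=prompt).images[0]
image.save("tryon_result.png")  # illustrative output path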
🤗 Try our Hugging Face demo
The demo runs on GPUs provided by ZeroGPU, and the automatic mask generation code is based on OOTDiffusion and DCI-VTON.
Parts of the code are based on IP-Adapter.
@article{choi2024improving,
title={Improving Diffusion Models for Virtual Try-on},
author={Choi, Yisol and Kwak, Sangkyung and Lee, Kyungmin and Choi, Hyungwon and Shin, Jinwoo},
journal={arXiv preprint arXiv:2403.05139},
year={2024}
}
The code and checkpoints in this repository are released under the CC BY-NC-SA 4.0 license.