sample (torch.FloatTensor) —
the current instance of the sample being created by the diffusion process.
generator (torch.Generator, optional) — a random number generator.
return_dict (bool) — whether to return an EulerAncestralDiscreteSchedulerOutput instead of a plain tuple.
Returns
~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput or tuple
A ~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput if return_dict is True, otherwise
a tuple. When returning a tuple, the first element is the sample tensor.
Predict the sample at the previous timestep by reversing the SDE. This is the core function for propagating the
diffusion process from the learned model outputs (most often the predicted noise).
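The step described above can be sketched in standalone form. The sketch below follows the common k-diffusion formulation of ancestral Euler sampling, splitting the target noise level into a deterministic part (sigma_down) and a stochastic part (sigma_up); the function name and the epsilon-prediction assumption are illustrative, not the scheduler's exact implementation.

```python
import torch


def euler_ancestral_step(model_output, sigma_from, sigma_to, sample, generator=None):
    # Split the target noise level sigma_to into a deterministic part
    # (sigma_down) and a stochastic part (sigma_up), as in ancestral sampling.
    sigma_up = (sigma_to**2 * (sigma_from**2 - sigma_to**2) / sigma_from**2) ** 0.5
    sigma_down = (sigma_to**2 - sigma_up**2) ** 0.5

    # Predicted denoised sample, assuming epsilon (noise) prediction.
    pred_original_sample = sample - sigma_from * model_output

    # Euler step toward sigma_down, then re-inject fresh noise up to sigma_to.
    derivative = (sample - pred_original_sample) / sigma_from
    prev_sample = sample + derivative * (sigma_down - sigma_from)
    noise = torch.randn(sample.shape, generator=generator)
    return prev_sample + noise * sigma_up
```

Passing a seeded torch.Generator makes the stochastic part of the step reproducible, which is what the generator argument above is for.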
Adapt a model to a new task
Many diffusion systems share the same components, allowing you to adapt a pretrained model for one task to an entirely different task. This guide will show you how to adapt a pretrained text-to-image model for inpainting by initializing and modifying the architecture of a pretrained UNet2DConditionModel.
A text-to-image model such as runwayml/stable-diffusion-v1-5 expects an input sample with 4 channels, which you can confirm from the UNet's configuration:
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True)
pipeline.unet.config["in_channels"] |
4
Inpainting requires 9 channels in the input sample. You can check this value in a pretrained inpainting model like runwayml/stable-diffusion-inpainting:
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-inpainting", use_safetensors=True) |
pipeline.unet.config["in_channels"] |
9
To adapt your text-to-image model for inpainting, you'll need to change the number of in_channels from 4 to 9. Initialize a UNet2DConditionModel with the pretrained text-to-image model weights, and change in_channels to 9. Changing the number of in_channels means you need to set ignore_mismatched_sizes=True and low_cpu_mem_usage=False, because the pretrained conv_in weights no longer match the new input shape:
from diffusers import UNet2DConditionModel

model_id = "runwayml/stable-diffusion-v1-5" |
unet = UNet2DConditionModel.from_pretrained( |
model_id, |
subfolder="unet", |
in_channels=9, |
low_cpu_mem_usage=False, |
ignore_mismatched_sizes=True, |
use_safetensors=True, |
)
The pretrained weights of the other components of the text-to-image model are initialized from their checkpoints, but the input channel weights (conv_in.weight) of the UNet are randomly initialized. It is important to finetune the model for inpainting, because otherwise the model returns noise.
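The jump from 4 to 9 input channels reflects how a Stable Diffusion inpainting UNet is conditioned: the noisy latents (4 channels) are concatenated with a downsampled binary mask (1 channel) and the latents of the masked image (4 channels). A minimal sketch of that input assembly, using random tensors in place of real latents (the variable names are illustrative):

```python
import torch

batch, height, width = 1, 64, 64  # latent-space resolution for a 512x512 image

noisy_latents = torch.randn(batch, 4, height, width)         # 4 channels
mask = torch.ones(batch, 1, height, width)                   # 1 channel (1 = repaint)
masked_image_latents = torch.randn(batch, 4, height, width)  # 4 channels

# The inpainting UNet sees all three concatenated along the channel dimension,
# which is why its conv_in layer must accept 4 + 1 + 4 = 9 channels.
unet_input = torch.cat([noisy_latents, mask, masked_image_latents], dim=1)
print(unet_input.shape[1])  # 9
```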
Text-guided image inpainting
The StableDiffusionInpaintPipeline allows you to edit specific parts of an image by providing a mask and a text prompt. It uses a version of Stable Diffusion, like runwayml/stable-diffusion-inpainting, that is specifically trained for inpainting tasks.
Get started by loading an instance of the StableDiffusionInpaintPipeline: |
import PIL |
import requests |
import torch |
from io import BytesIO |
from diffusers import StableDiffusionInpaintPipeline |
pipeline = StableDiffusionInpaintPipeline.from_pretrained( |
"runwayml/stable-diffusion-inpainting", |
torch_dtype=torch.float16, |
) |
pipeline = pipeline.to("cuda") |
Download an image and a mask of a dog which you’ll eventually replace: |
def download_image(url): |
response = requests.get(url) |
return PIL.Image.open(BytesIO(response.content)).convert("RGB") |
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" |
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" |
init_image = download_image(img_url).resize((512, 512)) |
mask_image = download_image(mask_url).resize((512, 512)) |
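White pixels in the mask mark the region the pipeline repaints, while black pixels are preserved from the original image. In pixel space, the final composite behaves roughly like the blend below; this is a simplification to show the mask semantics (the real pipeline operates on latents), and the tensors here are stand-ins for actual image data.

```python
import torch

init = torch.rand(3, 512, 512)       # original image, C x H x W, values in [0, 1]
generated = torch.rand(3, 512, 512)  # content produced by the model
mask = torch.zeros(1, 512, 512)
mask[:, 200:350, 150:400] = 1.0      # white rectangle = region to repaint

# Keep the original where mask == 0, take generated content where mask == 1.
composite = init * (1 - mask) + generated * mask
```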
Now you can create a prompt to replace the mask with something else: |
prompt = "Face of a yellow cat, high resolution, sitting on a park bench" |
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
The documentation shows the original image, the mask_image, the prompt ("Face of a yellow cat, high resolution, sitting on a park bench"), and the inpainted output side by side.
A previous experimental implementation of inpainting used a different, lower-quality process. To ensure backwards compatibility, loading a pretrained pipeline that doesn’t contain the new model will still apply the old inpainting method. |