```py
image = pipeline(
    prompt_embeds=prompt_embeds,  # generated from Compel
    negative_prompt_embeds=negative_prompt_embeds,  # generated from Compel
    image=init_image,
).images[0]
```

## ControlNet

ControlNets provide a more flexible and accurate way to control image generation because you can use an additional conditioning image. The conditioning image can be a canny image, depth map, image segmentation, and even scribbles! Whatever type of conditioning image you choose, the ControlNet ge...
```py
# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)
init_image = init_image.resize((958, 960))  # resize to depth image dimensions

depth_image = load_image("https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png")
make_image_grid([init_image, depth_image], rows=1, cols=2)
```

Load a ControlNet model conditioned on depth maps and the AutoPipelineForImage2Image:

```py
from diffusers import ControlNetModel, AutoPipelineForImage2Image
import torch

controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, variant="fp16", use_safetensors=True)
pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()
```

Now generate a new image conditioned on the depth map, initial image, and prompt:

```py
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image_control_net = pipeline(prompt, image=init_image, control_image=depth_image).images[0]
make_image_grid([init_image, depth_image, image_control_net], rows=1, cols=3)
```

[Figure: initial image, depth image, ControlNet image]

Let's apply a new style to the image generated from the ControlNet by chaining it with an image-to-image pipeline:

```py
pipeline = AutoPipelineForImage2Image.from_pretrained(
    "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16,
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

prompt = "elden ring style astronaut in a jungle"  # include the token "elden ring style" in the prompt
negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy"

image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image_control_net, strength=0.45, guidance_scale=10.5).images[0]
make_image_grid([init_image, depth_image, image_control_net, image_elden_ring], rows=2, cols=2)
```

## Optimize

Running diffusion models is computationally expensive and intensive, but with a few optimization tricks, it is entirely possible to run them on consumer and free-tier GPUs. For example, you can use a more memory-e...
```diff
+ pipeline.enable_xformers_memory_efficient_attention()
```

With torch.compile, you can boost your inference speed even more by wrapping your UNet with it:

```py
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)
```

To learn more, take a look at the Reduce memory usage and Torch 2.0 guides...
## Pipeline callbacks

The denoising loop of a pipeline can be modified with custom defined functions using the callback_on_step_end parameter. The callback function is executed at the end of each step, and modifies the pipeline attributes and variables for the next step. This is really useful for dynamically adjusting cer...

```py
def callback_dynamic_cfg(pipeline, step_index, timestep, callback_kwargs):
    # adjust the batch_size of prompt_embeds according to guidance_scale
    if step_index == int(pipeline.num_timesteps * 0.4):
        prompt_embeds = callback_kwargs["prompt_embeds"]
        prompt_embeds = prompt_embeds.chunk(2)[-1]

        # update guidance_scale and prompt_embeds
        pipeline._guidance_scale = 0.0
        callback_kwargs["prompt_embeds"] = prompt_embeds
    return callback_kwargs
```

Now, you can pass the callback function to the callback_on_step_end parameter and the prompt_embeds to callback_on_step_end_tensor_inputs.

```py
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
generator = torch.Generator(device="cuda").manual_seed(1)
out = pipeline(
    prompt,
    generator=generator,
    callback_on_step_end=callback_dynamic_cfg,
    callback_on_step_end_tensor_inputs=['prompt_embeds']
)

out.images[0].save("out_custom_cfg.png")
```

## Interrupt the diffusion process

The interruption callback is supported for text-to-image, image-to-image, and inpainting for the StableDiffusionPipeline and StableDiffusionXLPipeline. Stopping the diffusion process early is useful when building UIs that work with Diffusers beca...
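The mechanics behind this flag can be sketched with a plain-Python stand-in that mimics the step-end callback contract: the loop checks an `_interrupt` attribute before each step, and the callback flips it once a chosen step index is reached. The `ToyPipeline` class and its attributes here are illustrative stand-ins, not the diffusers internals.

```python
# Toy "pipeline": runs a denoising-style loop and invokes a step-end
# callback with the same (pipeline, i, t, callback_kwargs) shape used by
# diffusers callbacks; setting _interrupt stops the loop early.
class ToyPipeline:
    def __init__(self):
        self._interrupt = False
        self.steps_run = 0

    def __call__(self, num_inference_steps, callback_on_step_end=None):
        for i in range(num_inference_steps):
            if self._interrupt:
                break  # skip the remaining denoising steps
            self.steps_run += 1  # stand-in for one denoising step
            if callback_on_step_end is not None:
                callback_on_step_end(self, i, None, {})

def interrupt_callback(pipeline, i, t, callback_kwargs):
    stop_idx = 10
    if i == stop_idx:
        pipeline._interrupt = True
    return callback_kwargs

pipe = ToyPipeline()
pipe(num_inference_steps=50, callback_on_step_end=interrupt_callback)
print(pipe.steps_run)  # the callback fires at step index 10, so 11 steps ran
```

Because the flag is only checked at the top of the loop, the step that sets it still completes; the remaining 39 steps are skipped.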
```py
pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipeline.enable_model_cpu_offload()
num_inference_steps = 50

def interrupt_callback(pipeline, i, t, callback_kwargs):
    stop_idx = 10
    if i == stop_idx:
        pipeline._interrupt = True
    return callback_kwargs

pipeline(
    "A photo of a cat",
    num_inference_steps=num_inference_steps,
    callback_on_step_end=interrupt_callback,
)
```

## Display image after each generation step

This tip was contributed by asomoza. Display an image after each generation step by accessing and converting the latents after each step into an image. The latent space is compressed to 128x128, so the images are also 128x128 which is useful for a quick preview. Use the funct...
```py
def latents_to_rgb(latents):
    weights = (
        (60, -60, 25, -70),
        (60, -5, 15, -50),
        (60, 10, -5, -35)
    )

    weights_tensor = torch.t(torch.tensor(weights, dtype=latents.dtype).to(latents.device))
    biases_tensor = torch.tensor((150, 140, 130), dtype=latents.dtype).to(latents.device)
    rgb_tensor = torch.einsum("...lxy,lr -> ...rxy", latents, weights_tensor) + biases_tensor.unsqueeze(-1).unsqueeze(-1)
    image_array = rgb_tensor.clamp(0, 255)[0].byte().cpu().numpy()
    image_array = image_array.transpose(1, 2, 0)

    return Image.fromarray(image_array)
```

Create a function to decode and save the latents into an image.

```py
def decode_tensors(pipe, step, timestep, callback_kwargs):
    latents = callback_kwargs["latents"]

    image = latents_to_rgb(latents)
    image.save(f"{step}.png")

    return callback_kwargs
```

Pass the decode_tensors function to the callback_on_step_end parameter to decode the tensors after each step. You also need to specify what you want to modify in the callback_on_step_end_tensor_inputs parameter, which in this case are the latents.

```py
from diffusers import AutoPipelineFo...
import torch
from PIL import Image
```
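For reference, the channel-mixing einsum in latents_to_rgb is just a per-pixel matrix multiply: each RGB value is a weighted sum of the four latent channels plus a per-channel bias, clamped to 0-255. A plain-Python sketch of the same arithmetic on a toy 4-channel 2x2 "latent" (illustrative values and a hypothetical helper name, no torch required):

```python
# rgb[r][x][y] = sum over l of weights[r][l] * latents[l][x][y] + biases[r]
# — the contraction torch.einsum("...lxy,lr -> ...rxy") performs, written
# as explicit loops. Weights/biases match the latents_to_rgb approximation.
weights = (
    (60, -60, 25, -70),   # red
    (60, -5, 15, -50),    # green
    (60, 10, -5, -35),    # blue
)
biases = (150, 140, 130)

latents = [[[1, 0], [0, 1]],   # channel 0
           [[0, 1], [1, 0]],   # channel 1
           [[1, 1], [0, 0]],   # channel 2
           [[0, 0], [1, 1]]]   # channel 3

def latents_to_rgb_loops(latents, weights, biases):
    h, w = len(latents[0]), len(latents[0][0])
    rgb = [[[0.0] * w for _ in range(h)] for _ in range(3)]
    for r in range(3):
        for x in range(h):
            for y in range(w):
                acc = sum(weights[r][l] * latents[l][x][y] for l in range(4))
                # clamp to the displayable 0..255 range, as the torch version does
                rgb[r][x][y] = min(max(acc + biases[r], 0), 255)
    return rgb

rgb = latents_to_rgb_loops(latents, weights, biases)
print(rgb[0][0][0])  # red at pixel (0,0): 60*1 - 60*0 + 25*1 - 70*0 + 150 = 235
```

The torch version transposes `weights` before the einsum (`torch.t`), so its `(l, r)` layout indexes the same coefficients these loops read as `weights[r][l]`.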