[Example images: Stable Diffusion v1.5, Stable Diffusion XL, Kandinsky 2.2, ControlNet (pose conditioning)]
Configure pipeline parameters
There are a number of parameters that can be configured in the pipeline that affect how an image is generated. You can change the image’s output size, specify a negative prompt to improve imag...
import torch
from diffusers import AutoPipelineForText2Image
pipeline = AutoPipelineForText2Image.from_pretrained( |
"runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" |
).to("cuda") |
image = pipeline( |
"Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", height=768, width=512 |
).images[0] |
image
Other models may have different default image sizes depending on the image sizes in the training dataset. For example, SDXL’s default image size is 1024x1024, and using lower height and width values may result in lower quality images. Make sure you check the model’s API reference first!
Guidance scale
The guidan...
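The guidance scale controls how strongly the text prompt steers generation. Under the hood, classifier-free guidance combines an unconditional and a text-conditioned noise prediction; below is a minimal numeric sketch of that combination in plain Python, illustrating the standard formula rather than the pipeline’s actual internals:

```python
def apply_guidance(noise_uncond, noise_text, guidance_scale):
    """Classifier-free guidance: push the prediction away from the
    unconditional output and toward the text-conditioned one."""
    return [
        u + guidance_scale * (t - u)
        for u, t in zip(noise_uncond, noise_text)
    ]

# guidance_scale=1.0 reproduces the conditional prediction exactly
print(apply_guidance([0.0, 0.0], [1.0, 2.0], 1.0))  # [1.0, 2.0]
# higher scales extrapolate further toward the prompt
print(apply_guidance([0.0, 0.0], [1.0, 2.0], 7.5))  # [7.5, 15.0]
```

This is why very high guidance scales can over-saturate or distort images: the prediction is extrapolated well beyond the conditional output.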
import torch
from diffusers import AutoPipelineForText2Image
pipeline = AutoPipelineForText2Image.from_pretrained( |
"runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 |
).to("cuda") |
image = pipeline( |
"Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", guidance_scale=3.5 |
).images[0] |
image
[Example images: guidance_scale = 2.5, guidance_scale = 7.5, guidance_scale = 10.5]
Negative prompt
Just like how a prompt guides generation, a negative prompt steers the model away from things you don’t want the model to generate. This is commonly used to improve overall image quality by removing poor or bad image features such...
import torch
from diffusers import AutoPipelineForText2Image
pipeline = AutoPipelineForText2Image.from_pretrained( |
"runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 |
).to("cuda") |
image = pipeline( |
prompt="Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", |
negative_prompt="ugly, deformed, disfigured, poor details, bad anatomy", |
).images[0] |
image
[Example images: negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy", negative_prompt = "astronaut"]
Generator
A torch.Generator object enables reproducibility in a pipeline by setting a manual seed. You can use a Generator to generate batches of images and iteratively improve on an image generated from a s...
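The reproducibility that torch.Generator provides rests on seeded pseudo-random number generation: the same seed always yields the same sequence of draws, and therefore the same starting noise and the same image. The idea can be illustrated with Python’s stdlib random module (an analogy only; the pipeline itself consumes a torch.Generator):

```python
import random

# two generators seeded identically produce identical streams,
# which is why a fixed seed reproduces the same image
gen_a = random.Random(30)
gen_b = random.Random(30)

draws_a = [gen_a.random() for _ in range(3)]
draws_b = [gen_b.random() for _ in range(3)]
print(draws_a == draws_b)  # True
```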
import torch
from diffusers import AutoPipelineForText2Image
pipeline = AutoPipelineForText2Image.from_pretrained( |
"runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 |
).to("cuda") |
generator = torch.Generator(device="cuda").manual_seed(30) |
image = pipeline( |
"Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", |
generator=generator, |
).images[0] |
image
Control image generation
There are several ways to exert more control over how an image is generated outside of configuring a pipeline’s parameters, such as prompt weighting and ControlNet models.
Prompt weighting
Prompt weighting is a technique for increasing or decreasing the importance of concepts in a promp...
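Conceptually, prompt weighting scales the text embeddings for individual concepts before they condition the model, so an up-weighted concept contributes more to each prediction and a down-weighted one contributes less. A toy sketch of that scaling in plain Python (the embedding values here are made up for illustration; libraries such as Compel build the real prompt_embeds tensor for you):

```python
def weight_embedding(embedding, weight):
    """Scale a concept's embedding vector to raise or lower its influence."""
    return [weight * x for x in embedding]

jungle = [0.5, -1.0, 2.0]             # toy embedding for "jungle"
print(weight_embedding(jungle, 1.5))  # up-weighted: [0.75, -1.5, 3.0]
print(weight_embedding(jungle, 0.5))  # down-weighted: [0.25, -0.5, 1.0]
```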
import torch
from diffusers import AutoPipelineForText2Image
pipeline = AutoPipelineForText2Image.from_pretrained( |
"runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 |
).to("cuda") |
image = pipeline( |
prompt_embeds=prompt_embeds, # generated from Compel |
negative_prompt_embeds=negative_prompt_embeds, # generated from Compel |
).images[0]
ControlNet
As you saw in the ControlNet section, these models offer a more flexible and accurate way to generate images by incorporating an additional conditioning image input. Each ControlNet model is pretrained on a particular type of conditioning image to generate new images that resemble it. For exampl...
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)
For more tips on how to optimize your code to save memory and speed up inference, read the Memory and speed and Torch 2.0 guides.
Unconditional Image Generation |
The DiffusionPipeline is the easiest way to use a pre-trained diffusion system for inference.
Start by creating an instance of DiffusionPipeline and specify which pipeline checkpoint you would like to download. |
You can use the DiffusionPipeline for any Diffusers’ checkpoint. |
In this guide though, you’ll use DiffusionPipeline for unconditional image generation with DDPM: |
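DDPM generates images by starting from pure noise and repeatedly removing a predicted noise component over many timesteps. A toy numeric sketch of this reverse process (a deliberately simplified illustration of the iterative-denoising idea, not the scheduler’s actual update rule):

```python
def toy_reverse_process(x, predict_noise, num_steps):
    """Iteratively subtract a fraction of the predicted noise,
    gradually refining the sample (simplified DDPM-style loop)."""
    for _ in range(num_steps):
        x = x - predict_noise(x) / num_steps
    return x

# pretend the "model" predicts the sample itself as the noise,
# so each denoising step pulls the value toward 0
result = toy_reverse_process(10.0, lambda x: x, num_steps=50)
print(abs(result) < 10.0)  # True: the sample moved toward the target
```

The real pipeline runs a UNet noise predictor and a scheduler over hundreds of such steps on image-shaped tensors.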
>>> from diffusers import DiffusionPipeline |
>>> generator = DiffusionPipeline.from_pretrained("google/ddpm-celebahq-256") |
The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. |
Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on GPU. |
You can move the generator object to GPU, just like you would in PyTorch. |
>>> generator.to("cuda") |
Now you can use the generator to produce an image:
>>> image = generator().images[0] |
The output is by default wrapped into a PIL Image object. |
You can save the image by simply calling: |
>>> image.save("generated_image.png") |
UNetMotionModel
The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual ...
sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).
disable_freeu < source > ( )
Disables the FreeU mechanism.
enable_forward_chunking < source > ( chunk_size: Optional = None dim: int = 0 )
Parameters
chunk_size (int, optional) — The chunk size of the feed-forward layers. If not specified, the feed-forward layer will be run individually over each tensor of dim=dim.
dim (int, optional, defaults to 0) — The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) or dim=1 (sequence length).
Sets the attention processor to use feed-forward chunking.
enable_freeu < source > ( s1: float s2: float b1: float b2: float )
Parameters
s1 (float) —
Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to mitigate the “oversmoothing effect” in the enhanced denoising process.
s2 (float) — Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to mitigate the “oversmoothing effect” in the enhanced denoising process.
b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features.
b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features.
Enables the FreeU mechanism from https://arxiv.org/abs/2309....
are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL.
forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None added_cond_kwargs: Opti...
The noisy input tensor with the following shape: (batch, num_frames, channel, height, width).
timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input.
encoder_hidden_states (torch.FloatTensor) — The encoder hidden states with shape (batch, sequence_length, feature_dim).
timestep_cond (torch.Tensor, optional, defaults to None) — Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed