>>> image = pipe(prompt).images[0]

disable_attention_slicing
< source > ( )
Disable sliced attention computation. If enable_attention_slicing was previously called, attention is computed in one step.

enable_xformers_memory_efficient_attention
< source > ( attention_op: Optional = None )
Parameters
attention_op (Callable, optional) —
Override the default None operator for use as the op argument to the memory_efficient_attention() function of xFormers.
Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speedup during inference. A speedup during training is not guaranteed.
⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
precedence.
Examples:
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp
>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround: the VAE's attention shape is not accepted by Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)

disable_xformers_memory_efficient_attention
< source > ( )
Disable memory efficient attention from xFormers.

disable_freeu
< source > ( )
Disables the FreeU mechanism if enabled.

enable_freeu
< source > ( s1: float s2: float b1: float b2: float )
Parameters
s1 (float) —
Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to mitigate the “oversmoothing effect” in the enhanced denoising process.
s2 (float) —
Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to mitigate the “oversmoothing effect” in the enhanced denoising process.
b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features.
b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features.
Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. Combinations of these scaling factors are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL.

StableDiffusionPipelineOutput
class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput
< source > ( images: Union nsfw_content_detected: Optional )
Parameters
images (List[PIL.Image.Image] or np.ndarray) —
List of denoised PIL images of length batch_size or a NumPy array of shape (batch_size, height, width, num_channels).
nsfw_content_detected (List[bool]) —
List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content, or None if safety checking could not be performed.
Output class for Stable Diffusion pipelines.
Unconditional Latent Diffusion
Overview
Unconditional Latent Diffusion was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer.
The abstract of the paper is the following:
By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. Howev...
The original codebase can be found here.
Tips:
Available Pipelines:
Pipeline | Tasks | Colab
pipeline_latent_diffusion_uncond.py | Unconditional Image Generation | -
Examples:
LDMPipeline
class diffusers.LDMPipeline
< source > ( vqvae: VQModel unet: UNet2DModel scheduler: DDIMScheduler )
Parameters
vqvae (VQModel) —
Vector-quantized (VQ) Model to encode and decode images to and from latent representations.
unet (UNet2DModel) — U-Net architecture to denoise the encoded image latents.
scheduler (SchedulerMixin) —
DDIMScheduler is to be used in combination with unet to denoise the encoded image latents.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.).
__call__
< source > (
batch_size: int = 1
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None
eta: float = 0.0
num_inference_steps: int = 50
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
**kwargs
)