precedent. Examples:

>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp
>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)

disable_xformers_memory_efficient_attention
< source > ( )
Disables memory-efficient attention from xFormers.

disable_freeu
< source > ( )
Disables the FreeU mechanism if it has been enabled.

enable_freeu
< source > ( s1: float s2: float b1: float b2: float )
Parameters
s1 (float) —
Scaling factor for stage 1 to attenuate the contributions of the skip features, mitigating the “oversmoothing effect” of the enhanced denoising process.
s2 (float) —
Scaling factor for stage 2 to attenuate the contributions of the skip features, mitigating the “oversmoothing effect” of the enhanced denoising process.
b1 (float) —
Scaling factor for stage 1 to amplify the contributions of backbone features.
b2 (float) —
Scaling factor for stage 2 to amplify the contributions of backbone features.
Enables the FreeU mechanism as described in https://arxiv.org/abs/2309.11497. The suffixes of the scaling factors indicate the stage where they are applied. Refer to the official repository for value combinations known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL.

StableDiffusionPipelineOutput
class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput
< source > ( images: Union nsfw_content_detected: Optional )
Parameters
images (List[PIL.Image.Image] or np.ndarray) —
List of denoised PIL images of length batch_size or a NumPy array of shape (batch_size, height, width, num_channels).
nsfw_content_detected (List[bool]) —
List indicating whether the corresponding generated image contains “not-safe-for-work” (NSFW) content, or None if safety checking could not be performed.

Output class for Stable Diffusion pipelines.
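As a sketch of how the FreeU methods above are used together: the scaling factors are passed to enable_freeu on a loaded pipeline. The specific values here are assumptions taken from the combinations suggested for Stable Diffusion v2 in the official FreeU repository, not part of this document:

```python
>>> import torch
>>> from diffusers import DiffusionPipeline

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> # Assumed values; see the FreeU repository for combinations known to work well per pipeline
>>> pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.1, b2=1.2)
>>> image = pipe("an astronaut riding a horse").images[0]
>>> pipe.disable_freeu()  # restores the default UNet behavior
```

Calling disable_freeu afterwards removes the stage-wise rescaling, so the same pipeline object can be reused for ordinary sampling.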
Denoising Diffusion Implicit Models (DDIM)
Overview
Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.
The abstract of the paper is the following:
Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.
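The speed-up described in the abstract comes from the deterministic (eta = 0) DDIM update, which lets the sampler skip most of the 1000 training timesteps. A minimal NumPy sketch follows; the linear beta schedule and the zero "noise prediction" are illustrative assumptions standing in for the paper's trained network:

```python
import numpy as np

# Illustrative DDPM-style linear beta schedule (an assumption, not from the paper)
num_train_timesteps = 1000
betas = np.linspace(1e-4, 0.02, num_train_timesteps)
alphas_cumprod = np.cumprod(1.0 - betas)

def ddim_step(x_t, eps, t, t_prev):
    """One deterministic DDIM step (eta = 0) from timestep t to t_prev."""
    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t_prev] if t_prev >= 0 else 1.0
    # Predict x_0 from the noise estimate, then re-noise it to the lower level t_prev
    x0_pred = (x_t - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
    return np.sqrt(a_prev) * x0_pred + np.sqrt(1.0 - a_prev) * eps

# Sample using only 50 of the 1000 training steps
timesteps = list(range(0, num_train_timesteps, num_train_timesteps // 50))[::-1]
x = np.random.randn(3, 8, 8)
for i, t in enumerate(timesteps):
    t_prev = timesteps[i + 1] if i + 1 < len(timesteps) else -1
    eps = np.zeros_like(x)  # placeholder for a trained noise-prediction network
    x = ddim_step(x, eps, t, t_prev)
```

Because each step maps the current sample to a predicted x_0 and back, the subsequence of timesteps can be far shorter than the training chain, which is exactly the 10x–50x wall-clock saving the abstract claims.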
The original codebase of this paper can be found here: ermongroup/ddim.
For questions, feel free to contact the author on tsong.me.
DDIMScheduler
class diffusers.DDIMScheduler
< source >
(
num_train_timesteps: int = 1000
beta_start: float = 0.0001
beta_end: float = 0.02
beta_schedule: str = 'linear'
trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None
clip_sample: bool = True
set_alpha_to_one: bool = True
steps_offset: int = 0
prediction_type: str = 'epsilon'
thresholding: bool = False
dynamic_thresholding_ratio: float = 0.995
clip_sample_range: float = 1.0
sample_max_value: float = 1.0
)
Parameters
num_train_timesteps (int) — number of diffusion steps used to train the model.
beta_start (float) — the starting beta value for inference.
beta_end (float) — the final beta value for inference.
beta_schedule (str) —
the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
linear, scaled_linear, or squaredcos_cap_v2.
trained_betas (np.ndarray, optional) —
option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc.
clip_sample (bool, default True) —
option to clip predicted sample for numerical stability.
clip_sample_range (float, default 1.0) —
the maximum magnitude for sample clipping. Valid only when clip_sample=True.
set_alpha_to_one (bool, default True) —
each diffusion step uses the value of alphas product at that step and at the previous one. For the final
step there is no previous alpha. When this option is True the previous alpha product is fixed to 1,
otherwise it uses the value of alpha at step 0.
steps_offset (int, default 0) —
an offset added to the inference steps. You can use a combination of steps_offset=1 and
set_alpha_to_one=False to make the last step use step 0 for the previous alpha product, as done in
Stable Diffusion.
prediction_type (str, default epsilon, optional) —
prediction type of the scheduler function; one of epsilon (predicting the noise of the diffusion
process), sample (directly predicting the noisy sample), or v_prediction (see section 2.4 of
https://imagen.research.google/video/paper.pdf).