Load pretrained attention processor layers into UNet2DConditionModel. The attention processor layers have to be defined in cross_attention.py and be a torch.nn.Module subclass.
This function is experimental and might change in the future.
You need to be logged in (huggingface-cli login) to use private or gated models.
Activate the special "offline-mode" to use this method in a firewalled environment.
save_attn_procs < source > ( save_directory: typing.Union[str, os.PathLike] is_main_process: bool = True weights_name: str = 'pytorch_lora_weights.bin' save_function: typing.Callable = None )
Parameters

save_directory (str or os.PathLike) — Directory to which to save. Will be created if it doesn't exist.
is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful during distributed training (e.g. on TPUs), when this function needs to be called on all processes. In that case, set is_main_process=True only on the main process to avoid race conditions.
weights_name (str, optional, defaults to 'pytorch_lora_weights.bin') — The name of the file the attention processor weights are saved under.
save_function (Callable) — The function to use to save the state dictionary. Useful during distributed training (e.g. on TPUs), when torch.save needs to be replaced by another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.
Save an attention processor to a directory, so that it can be re-loaded using the
[load_attn_procs()](/docs/diffusers/v0.12.0/en/api/loaders#diffusers.loaders.UNet2DConditionLoadersMixin.load_attn_procs) method.
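The distributed-save contract described above (create the directory, write only from the main process, allow a custom save_function) can be sketched without diffusers installed. This is an illustrative stand-in, not the library's implementation: json.dump takes the place of torch.save, and save_state mimics the save_attn_procs signature.

```python
import json
import os
import tempfile

def save_state(state, save_directory, is_main_process=True, save_function=None):
    """Illustrative sketch of the save_attn_procs flow."""
    os.makedirs(save_directory, exist_ok=True)
    if not is_main_process:
        return  # non-main ranks skip the write to avoid race conditions
    if save_function is None:
        # stand-in for torch.save
        def save_function(obj, path):
            with open(path, "w") as f:
                json.dump(obj, f)
    save_function(state, os.path.join(save_directory, "pytorch_lora_weights.bin"))

save_dir = os.path.join(tempfile.mkdtemp(), "lora")
save_state({"rank": 4}, save_dir, is_main_process=False)  # simulated non-main rank: no file
save_state({"rank": 4}, save_dir, is_main_process=True)   # simulated main rank: writes
saved_files = os.listdir(save_dir)
print(saved_files)  # ['pytorch_lora_weights.bin']
```

Calling the function on every rank while writing only from the main one is exactly why is_main_process exists: all processes may reach the save point, but only one touches the file.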
Prior Transformer

The Prior Transformer was originally introduced in Hierarchical Text-Conditional Image Generation with CLIP Latents by Ramesh et al. It is used to predict CLIP image embeddings from CLIP text embeddings; image embeddings are predicted through a denoising diffusion process.

The abstract from the paper is:

Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.

PriorTransformer

class diffusers.PriorTransformer < source > ( num_attention_heads: int = 32 attention_head_dim: int = 64 num_layers: int = 20 embedding_dim: int = 768 num_embeddings = 77 additional_embeddings = 4 dropout: float = 0.0 time_embed_act_fn: str = 'silu' norm_in_type: Optional = None embedding_proj_norm_type: Optional = None encoder_hid_proj_type: Optional = 'linear' added_emb_type: Optional = 'prd' time_embed_dim: Optional = None embedding_proj_dim: Optional = None clip_embed_dim: Optional = None )

Parameters

num_attention_heads (int, optional, defaults to 32) — The number of heads to use for multi-head attention.
attention_head_dim (int, optional, defaults to 64) — The number of channels in each head.
num_layers (int, optional, defaults to 20) — The number of layers of Transformer blocks to use.
embedding_dim (int, optional, defaults to 768) — The dimension of the model input hidden_states.
num_embeddings (int, optional, defaults to 77) — The number of embeddings of the model input hidden_states.
additional_embeddings (int, optional, defaults to 4) — The number of additional tokens appended to the projected hidden_states. The actual length of the hidden_states used is num_embeddings + additional_embeddings.
dropout (float, optional, defaults to 0.0) — The dropout probability to use.
time_embed_act_fn (str, optional, defaults to 'silu') — The activation function to use to create timestep embeddings.
norm_in_type (str, optional, defaults to None) — The normalization layer to apply on the hidden states before passing them to the Transformer blocks. Set it to None if normalization is not needed.
embedding_proj_norm_type (str, optional, defaults to None) — The normalization layer to apply on the input proj_embedding. Set it to None if normalization is not needed.
encoder_hid_proj_type (str, optional, defaults to 'linear') — The projection layer to apply on the input encoder_hidden_states. Set it to None if encoder_hidden_states is None.
added_emb_type (str, optional, defaults to 'prd') — Additional embeddings to condition the model. Choose from prd or None. If prd is chosen, a token indicating the (quantized) dot product between the text embedding and image embedding, as proposed in the unCLIP paper (https://arxiv.org/abs/2204.06125), is prepended. If it is None, no additional embeddings are prepended.
time_embed_dim (int, optional, defaults to None) — The dimension of the timestep embeddings. If None, it is set to num_attention_heads * attention_head_dim.
embedding_proj_dim (int, optional, defaults to None) — The dimension of proj_embedding. If None, it is set to embedding_dim.
clip_embed_dim (int, optional, defaults to None) — The dimension of the output. If None, it is set to embedding_dim.

A Prior Transformer model.

forward < source > ( hidden_states timestep: Union proj_embedding: FloatTensor encoder_hidden_states: Optional = None attention_mask: Optional = None return_dict: bool = True ) → ~models.prior_transformer.PriorTransformerOutput or tuple

Parameters

hidden_states (torch.FloatTensor of shape (batch_size, embedding_dim)) —
The currently predicted image embeddings.
timestep (torch.LongTensor) — The current denoising step.
proj_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — The projected embedding vector the denoising process is conditioned on.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, num_embeddings, embedding_dim)) — The hidden states of the text embeddings the denoising process is conditioned on.
attention_mask (torch.BoolTensor of shape (batch_size, num_embeddings)) — Text mask for the text embeddings.
return_dict (bool, optional, defaults to True) — Whether or not to return a ~models.prior_transformer.PriorTransformerOutput instead of a plain tuple.

Returns

~models.prior_transformer.PriorTransformerOutput or tuple

If return_dict is True, a ~models.prior_transformer.PriorTransformerOutput is returned, otherwise a tuple is returned where the first element is the sample tensor.
The PriorTransformer forward method.

set_attn_processor < source > ( processor: Union )

Parameters

processor (dict of AttentionProcessor or only AttentionProcessor) — The instantiated processor class, or a dictionary of processor classes that will be set as the processor for all Attention layers. If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.

Sets the attention processor to use to compute attention.

set_default_attn_processor < source > ( )

Disables custom attention processors and sets the default attention implementation.

PriorTransformerOutput

class diffusers.models.transformers.prior_transformer.PriorTransformerOutput < source > ( predicted_image_embedding: FloatTensor )

Parameters

predicted_image_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — The predicted CLIP image embedding conditioned on the CLIP text embedding input.

The output of PriorTransformer.
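The shape bookkeeping implied by the defaults above can be checked with plain Python (no torch required). The numbers below simply mirror the documented defaults; the variable names are illustrative:

```python
# Documented PriorTransformer defaults
num_attention_heads = 32
attention_head_dim = 64
embedding_dim = 768
num_embeddings = 77        # CLIP text sequence length
additional_embeddings = 4

# time_embed_dim falls back to num_attention_heads * attention_head_dim
time_embed_dim = num_attention_heads * attention_head_dim

# actual token length seen by the Transformer blocks
sequence_length = num_embeddings + additional_embeddings

# shapes of the forward() inputs (and the output) for a batch of 2
batch_size = 2
hidden_states_shape = (batch_size, embedding_dim)
proj_embedding_shape = (batch_size, embedding_dim)
encoder_hidden_states_shape = (batch_size, num_embeddings, embedding_dim)
predicted_image_embedding_shape = (batch_size, embedding_dim)  # output

print(time_embed_dim, sequence_length)  # 2048 81
```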
DDIM

Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.

The abstract from the paper is:

Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.

The original codebase can be found at ermongroup/ddim.

DDIMPipeline

class diffusers.DDIMPipeline < source > ( unet scheduler )

Parameters

unet (UNet2DModel) — A UNet2DModel to denoise the encoded image latents.
scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image. Can be one of DDPMScheduler or DDIMScheduler.

Pipeline for image generation.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

__call__ < source > ( batch_size: int = 1 generator: Union = None eta: float = 0.0 num_inference_steps: int = 50 use_clipped_model_output: Optional = None output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple

Parameters

batch_size (int, optional, defaults to 1) —
The number of images to generate.
generator (torch.Generator, optional) — A torch.Generator to make generation deterministic.
eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler and is ignored in other schedulers. A value of 0 corresponds to DDIM and 1 corresponds to DDPM.
num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
use_clipped_model_output (bool, optional, defaults to None) — If True or False, see the documentation for DDIMScheduler.step(). If None, nothing is passed downstream to the scheduler (use None for schedulers which don't support this argument).
output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL.Image or np.array.
return_dict (bool, optional, defaults to True) — Whether or not to return an ImagePipelineOutput instead of a plain tuple.

Returns

ImagePipelineOutput or tuple

If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images.
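The "0 corresponds to DDIM and 1 corresponds to DDPM" behaviour of eta can be made concrete with the per-step noise scale σ from the DDIM paper. This is a minimal sketch: the function name and the sample cumulative alpha products are illustrative, not taken from the scheduler's code.

```python
import math

def ddim_sigma(alpha_prod_t, alpha_prod_t_prev, eta):
    """Per-step noise scale sigma_t from the DDIM paper.
    eta = 0 gives the deterministic DDIM update; eta = 1
    recovers the DDPM-style variance."""
    variance = ((1 - alpha_prod_t_prev) / (1 - alpha_prod_t)) \
        * (1 - alpha_prod_t / alpha_prod_t_prev)
    return eta * math.sqrt(variance)

# sample cumulative alpha products for two adjacent timesteps
sigma_ddim = ddim_sigma(0.5, 0.8, eta=0.0)  # deterministic: no noise added
sigma_ddpm = ddim_sigma(0.5, 0.8, eta=1.0)  # full DDPM-like stochasticity
print(sigma_ddim, round(sigma_ddpm, 4))  # 0.0 0.3873
```

Intermediate eta values interpolate between the two regimes, which is the computation-versus-sample-quality trade-off the abstract refers to.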