The frequency at which the callback function is called. If not specified, the callback is called at
every step. cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in
self.processor. clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
the output of the pre-final layer will be used for computing the prompt embeddings. Returns
StableDiffusionPipelineOutput or tuple
If return_dict is True, StableDiffusionPipelineOutput is returned,
otherwise a tuple is returned where the first element is a list with the generated images and the
second element is a list of bools indicating whether the corresponding generated image contains
“not-safe-for-work” (nsfw) content.
The call function to the pipeline for generation.
>>> import PIL
>>> import requests
>>> import torch
>>> from io import BytesIO
>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline
>>> def download_image(url):
...     response = requests.get(url)
...     return PIL.Image.open(BytesIO(response.content)).convert("RGB")
>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
>>> init_image = download_image(img_url).resize((768, 768))
>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
... )
>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
>>> pipe.enable_model_cpu_offload()
>>> mask_prompt = "A bowl of fruits"
>>> prompt = "A bowl of pears"
>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt)
>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents
>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0]
disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to computing decoding in one step.
disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to computing decoding in one step.
enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) —
The prompt to be encoded. device (torch.device) —
The torch device. num_images_per_prompt (int) —
The number of images that should be generated per prompt. do_classifier_free_guidance (bool) —
Whether to use classifier-free guidance or not. negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass
negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is
less than 1). prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input
argument. lora_scale (float, optional) —
A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states.
StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) —
List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) —
List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or
None if safety checking could not be performed. Output class for Stable Diffusion pipelines.
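As a short illustration of how this output is typically consumed, reusing the DiffEdit pipeline and prompts from the example above (return_dict is left at its default of True, and the output file names are hypothetical):
>>> result = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents)
>>> flags = result.nsfw_content_detected or [False] * len(result.images)
>>> for i, (image, flagged) in enumerate(zip(result.images, flags)):
...     if not flagged:  # skip any image the safety checker flagged as NSFW
...         image.save(f"diffedit_output_{i}.png")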
VQModel
The VQ-VAE model was introduced in Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray Kavukcuoglu. The model is used in 🤗 Diffusers to decode latent representations into images. Unlike AutoencoderKL, the VQModel works in a quantized latent space.
The abstract from the paper is:
Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” — where the latents are ignored when they are paired with a powerful autoregressive decoder — typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.
VQModel class diffusers.VQModel < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 3 sample_size: int = 32 num_vq_embeddings: int = 256 norm_num_groups: int = 32 vq_embed_dim: Optional = None scaling_factor: float = 0.18215 norm_type: str = 'group' mid_block_add_attention = True lookup_from_codebook = False force_upcast = False ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) —
Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) —
Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) —
Tuple of block output channels. layers_per_block (int, optional, defaults to 1) — Number of layers per block. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 3) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. num_vq_embeddings (int, optional, defaults to 256) — Number of codebook vectors in the VQ-VAE. norm_num_groups (int, optional, defaults to 32) — Number of groups for normalization layers. vq_embed_dim (int, optional) — Hidden dim of codebook vectors in the VQ-VAE. scaling_factor (float, optional, defaults to 0.18215) —
The component-wise standard deviation of the trained latent space computed using the first batch of the
training set. This is used to scale the latent space to have unit variance when training the diffusion
model. The latents are scaled with the formula z = z * scaling_factor before being passed to the
diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image
Synthesis with Latent Diffusion Models paper. norm_type (str, optional, defaults to "group") —
Type of normalization layer to use. Can be one of "group" or "spatial". A VQ-VAE model for decoding latent representations. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving). forward < source > ( sample: FloatTensor return_dict: bool = True ) → VQEncoderOutput or tuple Parameters sample (torch.FloatTensor) — Input sample. return_dict (bool, optional, defaults to True) —
Whether or not to return a models.vq_model.VQEncoderOutput instead of a plain tuple. Returns
VQEncoderOutput or tuple
If return_dict is True, a VQEncoderOutput is returned, otherwise a plain tuple
is returned.
The VQModel forward method. VQEncoderOutput class diffusers.models.vq_model.VQEncoderOutput < source > ( latents: FloatTensor ) Parameters latents (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
The encoded output sample from the last layer of the model. Output of VQModel encoding method.
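To make the API above concrete, here is a minimal sketch that instantiates VQModel with its default configuration and runs a random tensor through the encode and decode paths. The input shape is illustrative only, and the decode call (returning an object with a .sample attribute) is an assumption about the wider VQModel API not shown in this excerpt:
>>> import torch
>>> from diffusers import VQModel

>>> model = VQModel()  # default configuration from the signature above
>>> sample = torch.randn(1, 3, 32, 32)  # (batch_size, in_channels, height, width), chosen for illustration
>>> with torch.no_grad():
...     latents = model.encode(sample).latents  # VQEncoderOutput.latents
...     reconstruction = model.decode(latents).sample  # quantize the latents and decode back to image space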
Stable Diffusion XL
This script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset.
Stable Diffusion XL (SDXL) is a larger and more powerful iteration of the Stable Diffusion model, capable of producing higher resolution images. SDXL’s UNet is 3x larger and the model adds a second text encoder to the architecture. Depending on the hardware available to you, this can be very computationally intensive and it may not run on a consumer GPU like a Tesla T4. To help fit this larger model into memory and to speed up training, try enabling gradient_checkpointing, mixed_precision, and gradient_accumulation_steps. You can reduce your memory usage even more by enabling memory-efficient attention with xFormers and using bitsandbytes’ 8-bit optimizer. This guide will explore the train_text_to_image_sdxl.py training script to help you become more familiar with it, and how you can adapt it for your own use case. Before running the script, make sure you install the library from source:
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using:
cd examples/text_to_image
pip install -r requirements_sdxl.txt
🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment:
accelerate config
To set up a default 🤗 Accelerate environment without choosing any configurations:
accelerate config default
Or if your environment doesn’t support an interactive shell, like a notebook, you can use:
from accelerate.utils import write_basic_config
write_basic_config()
Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script.
Script parameters
The following sections highlight parts of the training script that are important for understanding how to modify it, but they don’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speed up training with mixed precision using the bf16 format, add the --mixed_precision parameter to the training command:
accelerate launch train_text_to_image_sdxl.py \
--mixed_precision="bf16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so you’ll focus on the parameters that are relevant to training SDXL in this guide. --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better VAE --proportion_empty_prompts: the proportion of image prompts to replace with empty strings --timestep_bias_strategy: where (earlier vs. later) in the timestep to apply a bias, which can encourage the model to either learn low or high frequency details --timestep_bias_multiplier: the weight of the bias to apply to the timestep --timestep_bias_begin: the timestep to begin applying the bias --timestep_bias_end: the timestep to end applying the bias --timestep_bias_portion: the proportion of timesteps to apply the bias to Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting either epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image_sdxl.py \
  --snr_gamma=5.0
Training script
The training script is also similar to the Text-to-image training guide, but it’s been modified to support SDXL training. This guide will focus on the code that is unique to the SDXL training script. It starts by creating functions to tokenize the prompts to calculate the prompt embeddings, and to compute the image embeddings with the VAE. Next, you’ll find a function to generate the timestep weights depending on the number of timesteps and the timestep bias strategy to apply. Within the main() function, in addition to loading a tokenizer, the script loads a second tokenizer and text encoder because the SDXL architecture uses two of each:
tokenizer_one = AutoTokenizer.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False
)
tokenizer_two = AutoTokenizer.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="tokenizer_2", revision=args.revision, use_fast=False
)
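The two text encoders are loaded right after the tokenizers in the same fashion. The snippet below is a simplified sketch that assumes the standard SDXL text encoder classes from transformers; the actual script resolves the encoder classes from the checkpoint configuration, so treat the class names here as an assumption:
from transformers import CLIPTextModel, CLIPTextModelWithProjection

# First text encoder, paired with tokenizer_one (assumed to be a CLIPTextModel for standard SDXL checkpoints)
text_encoder_one = CLIPTextModel.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision
)
# Second, larger text encoder, paired with tokenizer_two (assumed to be a CLIPTextModelWithProjection)
text_encoder_two = CLIPTextModelWithProjection.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="text_encoder_2", revision=args.revision
)

# Only the UNet is trained in this script, so both text encoders (like the VAE) are frozen to save memory
text_encoder_one.requires_grad_(False)
text_encoder_two.requires_grad_(False)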