```python
from diffusers import StableDiffusionXLPipeline
import torch

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
).to("cuda")

# Run the attention ops without any efficiency optimizations.
pipe.unet.set_default_attn_processor()
pipe.vae.set_default_attn_processor()

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt, num_inference_steps=30).images[0]
```

bfloat16 reduces the latency from 7.36 seconds to 4.63 seconds.

## Why bfloat16?

- Using a reduced numerical precision (such as float16 or bfloat16) for inference doesn't affect the generation quality but significantly improves latency.
- The benefits of bfloat16 compared to float16 are hardware-dependent. Modern generations of GPUs tend to favor bfloat16.
- Furthermore, in our experiments, we found bfloat16 to be much more resilient than float16 when used with quantization.

We have a dedicated guide for running inference in a reduced precision.

## Running attention efficiently

Attention blocks are intensive to run. But with PyTorch's scaled_dot_product_attention, we can run them efficiently.

```python
from diffusers import StableDiffusionXLPipeline
import torch

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt, num_inference_steps=30).images[0]
```

scaled_dot_product_attention improves the latency from 4.63 seconds to 3.31 seconds.
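As a mental model, scaled_dot_product_attention computes softmax(QKᵀ/√d)V. The following is a minimal pure-Python reference of that math, for intuition only; the PyTorch kernel fuses these steps and, in its memory-efficient backends, never materializes the full attention matrix:

```python
import math

def softmax(xs):
    # Subtract the max before exponentiating for numerical stability.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sdpa(q, k, v):
    """Reference scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.

    q, k, v are lists of row vectors (seq_len x d)."""
    d = len(q[0])
    out = []
    for qi in q:
        # Score this query against every key, scaled by sqrt(d).
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        weights = softmax(scores)
        # Weighted sum of the value rows.
        out.append(
            [sum(w * vj[c] for w, vj in zip(weights, v)) for c in range(len(v[0]))]
        )
    return out

# With strongly separated queries/keys, attention approximately selects
# the value row whose key matches each query.
q = [[10.0, 0.0], [0.0, 10.0]]
k = [[10.0, 0.0], [0.0, 10.0]]
v = [[1.0, 2.0], [3.0, 4.0]]
out = sdpa(q, k, v)
```

The fused kernel performs the same computation in one pass over the inputs, which is where the latency win comes from.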
## Use faster kernels with torch.compile

Compile the UNet and the VAE to benefit from the faster kernels. First, configure a few compiler flags:

```python
from diffusers import StableDiffusionXLPipeline
import torch

torch._inductor.config.conv_1x1_as_mm = True
torch._inductor.config.coordinate_descent_tuning = True
torch._inductor.config.epilogue_fusion = False
torch._inductor.config.coordinate_descent_check_all_directions = True
```

For the full list of compiler flags, refer to this file. It is also important to change the memory layout of the UNet and the VAE to "channels_last" when compiling them. This ensures maximum speed:

```python
pipe.unet.to(memory_format=torch.channels_last)
pipe.vae.to(memory_format=torch.channels_last)
```

Then, compile and perform inference:

```python
# Compile the UNet and VAE.
pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True)
pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# The first call to `pipe` will be slow; subsequent calls will be faster.
image = pipe(prompt, num_inference_steps=30).images[0]
```

torch.compile offers different backends and modes. As we're aiming for maximum inference speed, we opt for the inductor backend with the "max-autotune" mode, which uses CUDA graphs and optimizes the compilation graph specifically for latency. Setting fullgraph to True ensures there are no graph breaks in the underlying model, so torch.compile can be used to its fullest potential.

Using SDPA attention and compiling both the UNet and VAE reduces the latency from 3.31 seconds to 2.54 seconds.

## Combine the projection matrices of attention

Both the UNet and the VAE used in SDXL make use of Transformer-like blocks. A Transformer block consists of attention blocks and feed-forward blocks. In an attention block, the input is projected into three sub-spaces using three different projection matrices: Q, K, and V. In the naive implementation, these projections are performed on the input separately. But we can horizontally combine the projection matrices into a single matrix and perform the projection in one shot. This increases the size of the matmuls of the input projections and improves the impact of quantization (discussed next).

Enabling this kind of computation in Diffusers takes just a single line of code:

```python
pipe.fuse_qkv_projections()
```

It provides a minor boost from 2.54 seconds to 2.52 seconds.

Support for fuse_qkv_projections() is limited and experimental. As such, it's not available for many non-SD pipelines such as Kandinsky. You can refer to this PR to get an idea of how to support this kind of computation.

## Dynamic quantization

Apply dynamic int8 quantization to both the UNet and the VAE. Quantization adds conversion overhead to the model that is hopefully made up for by faster matmuls (dynamic quantization). If the matmuls are too small, these techniques may degrade performance.
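To build intuition for what "dynamic" quantization means, here is a small pure-Python sketch of per-tensor int8 quantization where the scale is derived from the data at runtime. This is illustrative only; torchao operates on tensors and fuses these steps into the matmul:

```python
def dynamic_quantize_int8(xs):
    """Pick a scale from the observed data at runtime, then map floats to int8."""
    # "Dynamic": the scale depends on this particular activation tensor.
    scale = max(abs(x) for x in xs) / 127.0
    qs = [max(-128, min(127, round(x / scale))) for x in xs]
    return qs, scale

def dequantize(qs, scale):
    return [q * scale for q in qs]

activations = [0.5, -1.27, 0.03, 1.0]
q, scale = dynamic_quantize_int8(activations)
recovered = dequantize(q, scale)
# Round-tripping introduces a quantization error bounded by scale / 2 per element.
max_err = max(abs(a - r) for a, r in zip(activations, recovered))
```

The int8 matmul itself is cheaper than its floating-point counterpart, but the quantize/dequantize steps are the conversion overhead mentioned above: for small matmuls they can dominate, which is why some layers are filtered out below.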
Through experimentation, we found that certain linear layers in the UNet and the VAE don't benefit from dynamic int8 quantization. You can check out the full code for filtering those layers here (referred to as dynamic_quant_filter_fn below).

You will leverage the ultra-lightweight pure-PyTorch library torchao and its user-friendly APIs for quantization.

First, configure all the compiler flags:

```python
from diffusers import StableDiffusionXLPipeline
import torch

# Notice the two new flags at the end.
torch._inductor.config.conv_1x1_as_mm = True
torch._inductor.config.coordinate_descent_tuning = True
torch._inductor.config.epilogue_fusion = False
torch._inductor.config.coordinate_descent_check_all_directions = True
torch._inductor.config.force_fuse_int_mm_with_mul = True
torch._inductor.config.use_mixed_mm = True
```

Define the filtering functions:
```python
def dynamic_quant_filter_fn(mod, *args):
    return (
        isinstance(mod, torch.nn.Linear)
        and mod.in_features > 16
        and (mod.in_features, mod.out_features)
        not in [
            (1280, 640),
            (1920, 1280),
            (1920, 640),
            (2048, 1280),
            (2048, 2560),
            (2560, 1280),
            (256, 128),
            (2816, 1280),
            (320, 640),
            (512, 1536),
            (512, 256),
            (512, 512),
            (640, 1280),
            (640, 1920),
            (640, 320),
            (640, 5120),
            (640, 640),
            (960, 320),
            (960, 640),
        ]
    )


def conv_filter_fn(mod, *args):
    return (
        isinstance(mod, torch.nn.Conv2d)
        and mod.kernel_size == (1, 1)
        and 128 in [mod.in_channels, mod.out_channels]
    )
```
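To see which layers dynamic_quant_filter_fn keeps and which it skips, here is a quick illustration with stand-in layer objects. This is a sketch; in the real pipeline, the filter receives actual torch.nn modules, and the full exclusion list is the one shown above:

```python
from types import SimpleNamespace

# A subset of the (in_features, out_features) shapes excluded above,
# kept small here for illustration.
EXCLUDED = {(1280, 640), (640, 640), (960, 320)}

def keeps_layer(in_features, out_features):
    """Mirrors the dynamic_quant_filter_fn logic for Linear layers."""
    return in_features > 16 and (in_features, out_features) not in EXCLUDED

layers = [
    SimpleNamespace(in_features=2048, out_features=8192),  # large matmul: quantize
    SimpleNamespace(in_features=8, out_features=64),       # tiny input dim: skip
    SimpleNamespace(in_features=640, out_features=640),    # excluded shape: skip
]
decisions = [keeps_layer(l.in_features, l.out_features) for l in layers]
```

Only the layers for which the filter returns True are quantized; the rest keep their bfloat16 matmuls.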
Then apply all the optimizations discussed so far:

```python
# SDPA + bfloat16.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
).to("cuda")

# Combine the attention projection matrices.
pipe.fuse_qkv_projections()

# Change the memory layout.
pipe.unet.to(memory_format=torch.channels_last)
pipe.vae.to(memory_format=torch.channels_last)
```

Since this quantization support is limited to linear layers only, we also turn suitable pointwise convolution layers into linear layers to maximize the benefit.
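The reason this swap is valid: a 1×1 convolution applies the same channel-mixing matrix at every spatial position, which is exactly a linear layer over the channel dimension. A pure-Python sketch of the equivalence (illustrative; the real code operates on NCHW tensors):

```python
def conv1x1(image, weight):
    """Apply a 1x1 convolution.

    image: [C_in][H][W] nested lists; weight: [C_out][C_in]."""
    c_in, h, w = len(image), len(image[0]), len(image[0][0])
    return [
        [[sum(weight[o][i] * image[i][y][x] for i in range(c_in)) for x in range(w)]
         for y in range(h)]
        for o in range(len(weight))
    ]

def linear_per_pixel(image, weight):
    """Treat each pixel's channel vector as a row and apply a Linear layer to it."""
    c_in, h, w = len(image), len(image[0]), len(image[0][0])
    out = [[[0.0] * w for _ in range(h)] for _ in weight]
    for y in range(h):
        for x in range(w):
            pixel = [image[i][y][x] for i in range(c_in)]
            for o, row in enumerate(weight):
                out[o][y][x] = sum(a * b for a, b in zip(row, pixel))
    return out

image = [[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]]  # 2 channels, 2x2
weight = [[1.0, -1.0], [0.5, 0.5]]                            # 2 out, 2 in channels
same = conv1x1(image, weight) == linear_per_pixel(image, weight)
```

Because the two are mathematically identical, swapping the 1×1 convolutions for linear layers changes nothing about the output while letting the linear-only quantization cover them.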
```python
from torchao import swap_conv2d_1x1_to_linear

swap_conv2d_1x1_to_linear(pipe.unet, conv_filter_fn)
swap_conv2d_1x1_to_linear(pipe.vae, conv_filter_fn)
```

Apply dynamic quantization:

```python
from torchao import apply_dynamic_quant

apply_dynamic_quant(pipe.unet, dynamic_quant_filter_fn)
apply_dynamic_quant(pipe.vae, dynamic_quant_filter_fn)
```

Finally, compile and perform inference:
```python
pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True)
pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt, num_inference_steps=30).images[0]
```

Applying dynamic quantization improves the latency from 2.52 seconds to 2.43 seconds.
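The optimizations compound. A quick back-of-the-envelope check using the latencies reported above:

```python
# Latency (seconds) after each successive optimization, as reported above.
latencies = {
    "baseline": 7.36,
    "bfloat16": 4.63,
    "SDPA": 3.31,
    "torch.compile": 2.54,
    "fused QKV projections": 2.52,
    "dynamic int8 quantization": 2.43,
}

overall_speedup = latencies["baseline"] / latencies["dynamic int8 quantization"]
# Roughly a 3x end-to-end speedup over the unoptimized baseline.
```

Note that the individual contributions are not independent: compilation, memory layout, and quantization interact, so the per-step deltas hold for this specific ordering on this specific hardware.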
## Custom Diffusion

Custom Diffusion is a training technique for personalizing image generation models. Like Textual Inversion, DreamBooth, and LoRA, Custom Diffusion only requires a few (~4-5) example images. This technique works by training only the weights in the cross-attention layers, and it uses a special word to represent the newly learned concept. Custom Diffusion is unique because it can also learn multiple concepts at the same time.

If you're training on a GPU with limited vRAM, you should try enabling xFormers with --enable_xformers_memory_efficient_attention for faster training with lower vRAM requirements (16GB). To save even more memory, add --set_grads_to_none to the training arguments to set the gradients to None instead of zero (this option can cause some issues, so if you experience any, try removing this parameter).

This guide will explore the train_custom_diffusion.py script to help you become more familiar with it, and how you can adapt it for your own use-case.

Before running the script, make sure you install the library from source:

```shell
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Navigate to the example folder with the training script and install the required dependencies:

```shell
cd examples/custom_diffusion
pip install -r requirements.txt
pip install clip-retrieval
```

🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed precision. It automatically configures your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more.

Initialize an 🤗 Accelerate environment:

```shell
accelerate config
```

To set up a default 🤗 Accelerate environment without choosing any configurations:

```shell
accelerate config default
```

Or, if your environment doesn't support an interactive shell, like a notebook, you can use:

```python
from accelerate.utils import write_basic_config

write_basic_config()
```

Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script.

The following sections highlight parts of the training script that are important for understanding how to modify it, but they don't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the script and let us know if you have any questions or concerns.

## Script parameters

The training script contains all the parameters to help you customize your training run. These are found in the parse_args() function. The function comes with default values, but you can also set your own values in the training command if you'd like. For example, to change the resolution of the input image:

```shell
accelerate launch train_custom_diffusion.py \
```
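The override mechanism behind this is standard argparse: every parameter gets a default inside parse_args(), and any flag passed on the command line wins. A minimal sketch of the pattern (the defaults below are illustrative stand-ins, not the script's actual values):

```python
import argparse

def parse_args(input_args=None):
    # Mirrors the structure of the training script's parse_args():
    # each training knob is a flag with a default the CLI can override.
    parser = argparse.ArgumentParser(description="Toy argument parser sketch")
    parser.add_argument("--resolution", type=int, default=512)
    parser.add_argument("--train_batch_size", type=int, default=2)
    return parser.parse_args(input_args)

# No flags: the defaults apply.
default_args = parse_args([])

# Passing a flag overrides the default, as in the accelerate launch example.
override_args = parse_args(["--resolution", "256"])
```

Any flag you don't pass keeps its default, so a training command only needs to list the parameters you want to change.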