```python
generator = torch.Generator("cuda").manual_seed(0)
image = pipe(prompt, generator=generator).images[0]
image
```
Cool, this is almost three times as fast for arguably the same image quality.
We strongly recommend always running your pipelines in float16, as we have so far very rarely seen any degradation in quality because of it.
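To get a feel for why half precision matters, here is a back-of-the-envelope estimate of the UNet's weight footprint. The ~860M parameter count is an assumed ballpark figure commonly cited for the Stable Diffusion v1 UNet, not something computed from the model here:

```python
# Rough weight-memory estimate for a ~860M-parameter UNet
# (the parameter count is an assumed ballpark figure).
num_params = 860_000_000

bytes_fp32 = num_params * 4  # float32: 4 bytes per parameter
bytes_fp16 = num_params * 2  # float16: 2 bytes per parameter

print(f"fp32: {bytes_fp32 / 1e9:.2f} GB")  # 3.44 GB
print(f"fp16: {bytes_fp16 / 1e9:.2f} GB")  # 1.72 GB
```

Halving the bytes per parameter halves the weight memory, and on modern GPUs float16 arithmetic is also considerably faster.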
Next, let’s see whether we really need 50 inference steps or whether we could get away with significantly fewer. The number of inference steps is tied to the denoising scheduler we use; choosing a more efficient scheduler can reduce the number of steps required.
Let’s have a look at all the schedulers the Stable Diffusion pipeline is compatible with.
```python
pipe.scheduler.compatibles
```
```
[diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler,
 diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler,
 diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler,
 diffusers.schedulers.scheduling_pndm.PNDMScheduler,
 diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler,
 diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler,
 diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler,
 diffusers.schedulers.scheduling_ddpm.DDPMScheduler,
 diffusers.schedulers.scheduling_ddim.DDIMScheduler]
```
Cool, that’s a lot of schedulers.
🧨 Diffusers is constantly adding novel schedulers/samplers that can be used with Stable Diffusion. For more information, we recommend taking a look at the official documentation here.
Alright, right now Stable Diffusion is using the PNDMScheduler, which usually requires around 50 inference steps. However, other schedulers such as DPMSolverMultistepScheduler or DPMSolverSinglestepScheduler seem to get away with just 20 to 25 inference steps. Let’s try them out.
You can set a new scheduler by making use of the from_config method.
```python
from diffusers import DPMSolverMultistepScheduler

pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
```
Now, let’s try to reduce the number of inference steps to just 20.
```python
generator = torch.Generator("cuda").manual_seed(0)
image = pipe(prompt, generator=generator, num_inference_steps=20).images[0]
image
```
The image does look a little different now, but it’s arguably still of equally high quality, and we’ve cut inference time to just 4 seconds 😍.
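If you want to reproduce such timings yourself, a minimal timing wrapper along these lines works (the `timed` helper is ours for illustration, not part of diffusers):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn(*args, **kwargs) and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed

# Usage sketch (assumes `pipe` and `prompt` are defined as above):
# out, seconds = timed(pipe, prompt, num_inference_steps=20)
# print(f"{seconds:.1f}s")
```

For GPU workloads, note that a wall-clock timer measures the full call including any host-side overhead, which is what matters for images-per-second comparisons like the ones in this guide.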
Memory Optimization
Using less memory during generation indirectly implies more speed, since we’re often trying to maximize how many images we can generate per second. Usually, the more images per inference run, the more images per second.
The easiest way to see how many images we can generate at once is to simply try it out and see when we get an “Out-of-memory” (OOM) error.
We can run batched inference by simply passing a list of prompts and generators. Let’s define a quick function that generates a batch for us.
```python
def get_inputs(batch_size=1):
    generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)]
    prompts = batch_size * [prompt]
    num_inference_steps = 20

    return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps}
```
This function returns a list of prompts and a list of generators, so we can reuse the generator that produced a result we like.
We also need a method that allows us to easily display a batch of images.
```python
from PIL import Image

def image_grid(imgs, rows=2, cols=2):
    w, h = imgs[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))
    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid
```
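As a quick sanity check of the grid geometry, the index math can be exercised with solid-color dummy images rather than real generations (the helper is repeated here only so the snippet runs standalone):

```python
from PIL import Image

def image_grid(imgs, rows=2, cols=2):
    # Same helper as above, repeated so this snippet is self-contained.
    w, h = imgs[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))
    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid

# Four 64x64 dummy images tile into one 128x128 grid:
# i % cols picks the column, i // cols picks the row.
dummies = [Image.new("RGB", (64, 64), c) for c in ("red", "green", "blue", "white")]
grid = image_grid(dummies, rows=2, cols=2)
print(grid.size)  # (128, 128)
```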
Cool, let’s see how much memory we can use, starting with batch_size=4.

```python
images = pipe(**get_inputs(batch_size=4)).images
image_grid(images)
```
Going over a batch_size of 4 will error out in this notebook (assuming we are running it on a T4 GPU). Also, we only generate slightly more images per second (3.75 s/image) compared to before (4 s/image).
However, the community has found some nice tricks to push the memory constraints further. After Stable Diffusion was released, improvements were found within days and shared freely over GitHub - open-source at its finest! I believe the original idea came from this GitHub thread.
By far most of the memory is taken up by the cross-attention layers. Instead of running this operation in batch, one can run it sequentially to save a significant amount of memory.
It can easily be enabled by calling enable_attention_slicing, as documented here.
```python
pipe.enable_attention_slicing()
```
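To see why slicing helps, here is a rough estimate of the peak size of a single self-attention score matrix in the UNet. All concrete numbers (a 512x512 image mapping to a 64x64 latent, 8 attention heads, float16 storage) are illustrative assumptions, not values read out of the actual model:

```python
# Rough peak-memory estimate for one self-attention score matrix.
# Assumptions: 512x512 image -> 64x64 latent -> 4096-token sequence,
# 8 attention heads, batch of 8, float16 (2 bytes per element).
seq_len = 64 * 64      # 4096 latent "tokens"
heads = 8
batch = 8
bytes_per_el = 2       # float16

# Scores are a (seq_len x seq_len) matrix per head, per batch item.
full = batch * heads * seq_len**2 * bytes_per_el
sliced = full // heads  # slicing materializes one slice at a time

print(f"all at once:  {full / 2**30:.2f} GiB")    # 2.00 GiB
print(f"one slice:    {sliced / 2**30:.2f} GiB")  # 0.25 GiB
```

The score matrices are transient activations, so computing them slice by slice trades a small amount of speed for a much lower peak memory, which is exactly the behavior observed in this notebook.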
Great, now that attention slicing is enabled, let’s try to double the batch size again, going for batch_size=8.

```python
images = pipe(**get_inputs(batch_size=8)).images
image_grid(images, rows=2, cols=4)
```
Nice, it works. However, the speed gain is again not very big (though it might be much more significant on other GPUs).
We’re at roughly 3.5 seconds per image 🔥, which is probably as fast as we can get on a simple T4 without sacrificing quality.
Next, let’s look into how to improve the quality!