## Optimizing for speed

The simplest optimization to run IF faster is to move all model components to the GPU.
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16)
pipe.to("cuda")
```
You can also run the diffusion process for fewer timesteps. This can be done either with the `num_inference_steps` argument:

```python
pipe("<prompt>", num_inference_steps=30)
```
or with the `timesteps` argument:

```python
from diffusers.pipelines.deepfloyd_if import fast27_timesteps

pipe("<prompt>", timesteps=fast27_timesteps)
```
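`fast27_timesteps` is a hand-tuned list of 27 timesteps. Conceptually, a custom schedule is just a strictly decreasing sequence of integers in `[0, num_train_timesteps)`; the evenly spaced sketch below is an illustrative stand-in, not the actual tuned schedule:

```python
def evenly_spaced_timesteps(num_steps, num_train_timesteps=1000):
    """Build a strictly decreasing list of timesteps.

    An evenly spaced stand-in for hand-tuned schedules like
    fast27_timesteps (the real list is not evenly spaced).
    """
    stride = num_train_timesteps // num_steps
    return [num_train_timesteps - 1 - i * stride for i in range(num_steps)]

schedule = evenly_spaced_timesteps(27)  # 27 timesteps from 999 down toward 0
```

Any such list can be passed as the `timesteps` argument, trading quality for speed as the list gets shorter.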
When doing image variation or inpainting, you can also decrease the number of timesteps with the `strength` argument. The `strength` argument controls the amount of noise added to the input image, which also determines how many steps to run in the denoising process. A smaller value varies the image less but runs faster.

```python
import torch
from diffusers import IFImg2ImgPipeline

pipe = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe(image=image, prompt="<prompt>", strength=0.3).images  # `image` is an existing input image
```
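To see how `strength` maps to step count, the sketch below mirrors the step-selection logic diffusers img2img pipelines use internally (a simplified reimplementation for illustration, not the library API). With 100 inference steps and `strength=0.3`, only the final 30 denoising steps run:

```python
def img2img_timesteps(scheduler_timesteps, num_inference_steps, strength):
    # strength decides how much noise is added to the input image and,
    # equivalently, how many of the trailing denoising steps are run.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return scheduler_timesteps[t_start:]

# A stand-in schedule: 100 timesteps from noisy (99) down to clean (0).
timesteps = list(range(99, -1, -1))
remaining = img2img_timesteps(timesteps, num_inference_steps=100, strength=0.3)  # 30 steps
```

So `strength=1.0` re-noises the image completely and runs the full schedule, while small values keep most of the input and run only the tail of the schedule.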
You can also use `torch.compile`. Note that we have not exhaustively tested `torch.compile` with IF, and it might not give expected results.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16)
pipe.to("cuda")

pipe.text_encoder = torch.compile(pipe.text_encoder)
pipe.unet = torch.compile(pipe.unet)
```
## Optimizing for memory
When optimizing for GPU memory, we can use the standard diffusers CPU offloading APIs, either the model-based CPU offloading:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()
```
or the more aggressive layer-based CPU offloading:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16)
pipe.enable_sequential_cpu_offload()
```
Additionally, T5 can be loaded in 8-bit precision:

```python
from transformers import T5EncoderModel
from diffusers import DiffusionPipeline

text_encoder = T5EncoderModel.from_pretrained(
    "DeepFloyd/IF-I-IF-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit"
)

pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-IF-v1.0",
    text_encoder=text_encoder,  # pass the previously instantiated 8bit text encoder
    unet=None,
    device_map="auto",
)

prompt_embeds, negative_embeds = pipe.encode_prompt("<prompt>")
```
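To get a feel for the savings: weight memory is roughly parameter count times bytes per parameter. The parameter count below (~4.8B for IF's T5-XXL text encoder) is an approximation used only for this back-of-the-envelope estimate:

```python
def weight_memory_gib(num_params, bytes_per_param):
    # Rough weight footprint only; ignores activations and any
    # quantization overhead.
    return num_params * bytes_per_param / 1024**3

T5_PARAMS = 4.8e9  # approximate parameter count of IF's T5-XXL text encoder (assumption)

fp16_gib = weight_memory_gib(T5_PARAMS, 2)  # fp16: 2 bytes per parameter
int8_gib = weight_memory_gib(T5_PARAMS, 1)  # int8: 1 byte per parameter, i.e. half the fp16 footprint
```

Under these assumptions, loading the text encoder in 8-bit cuts its weight memory in half relative to fp16, which is often the difference between fitting and not fitting on a consumer GPU.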
For CPU RAM-constrained machines like the Google Colab free tier, where we can't load all model components to the CPU at once, we can manually load the pipeline with only the text encoder or UNet when the respective model components are needed.

```python
import gc

import torch
from transformers import T5EncoderModel
from diffusers import DiffusionPipeline, IFPipeline, IFSuperResolutionPipeline
from diffusers.utils import pt_to_pil

text_encoder = T5EncoderModel.from_pretrained(
    "DeepFloyd/IF-I-IF-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit"
)

# text to image
pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-IF-v1.0",
    text_encoder=text_encoder,  # pass the previously instantiated 8bit text encoder
    unet=None,
    device_map="auto",
)
```