>>> import cv2
>>> import numpy as np
>>> import torch
>>> from PIL import Image
>>> from diffusers import (
...     ControlNetModel,
...     DDIMScheduler,
...     StableDiffusionControlNetInpaintPipeline,
... )
>>> from diffusers.utils import load_image

>>> init_image = load_image(
...     "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png"
... )
>>> init_image = init_image.resize((512, 512))

>>> generator = torch.Generator(device="cpu").manual_seed(1)

>>> mask_image = load_image(
...     "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png"
... )
>>> mask_image = mask_image.resize((512, 512))

>>> def make_canny_condition(image):
...     image = np.array(image)
...     image = cv2.Canny(image, 100, 200)
...     image = image[:, :, None]
...     image = np.concatenate([image, image, image], axis=2)
...     image = Image.fromarray(image)
...     return image

>>> control_image = make_canny_condition(init_image)

>>> controlnet = ControlNetModel.from_pretrained(
...     "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16
... )
>>> pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
... )
>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
>>> pipe.enable_model_cpu_offload()

>>> # generate image
>>> image = pipe(
...     "a handsome man with ray-ban sunglasses",
...     num_inference_steps=20,
...     generator=generator,
...     eta=1.0,
...     image=init_image,
...     mask_image=mask_image,
...     control_image=control_image,
... ).images[0]

enable_attention_slicing

( slice_size: Union = 'auto' )

Parameters

slice_size (str or int, optional, defaults to "auto") —
When "auto", halves the input to the attention heads, so attention is computed in two steps. If
"max", the maximum amount of memory is saved by running only one slice at a time. If a number is
provided, as many slices as attention_head_dim // slice_size are used; in this case, attention_head_dim
must be a multiple of slice_size.

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor
into slices to compute attention in several steps. For more than one attention head, the computation is performed
sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

⚠️ Don't enable attention slicing if you're already using scaled_dot_product_attention (SDPA) from PyTorch
2.0 or xFormers. These attention computations are already very memory efficient, so you won't need to enable
this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!

Examples:

>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]

disable_attention_slicing

( )

Disable sliced attention computation. If enable_attention_slicing was previously called, attention is
computed in one step.

enable_vae_slicing

( )

Enable sliced VAE decoding. When this option is enabled, the VAE splits the input tensor into slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

disable_vae_slicing

( )

Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method goes back to
computing decoding in one step.

enable_xformers_memory_efficient_attention

( attention_op: Optional = None )

Parameters

attention_op (Callable, optional) —
Override the default None operator for use as the op argument to the
memory_efficient_attention()
function of xFormers.

Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU
memory usage and a potential speedup during inference. A speedup during training is not guaranteed.

⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
precedence.

Examples:

>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for the VAE not accepting the attention shape used by Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)

disable_xformers_memory_efficient_attention

( )

Disable memory efficient attention from xFormers.

load_textual_inversion

( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_enco...

Parameters

pretrained_model_name_or_path —
Can be either one of the following or a list of them:

A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a pretrained model hosted on the Hub.
A path to a directory (for example ./my_text_inversion_directory/) containing the textual inversion weights.
A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights.
A torch state dict.

token (str or List[str], optional) —
Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a
list, then token must also be a list of equal length.

text_encoder (CLIPTextModel, optional) —
Frozen text encoder (clip-vit-large-patch14).
If not specified, the function uses self.text_encoder.

tokenizer (CLIPTokenizer, optional) —
A CLIPTokenizer to tokenize text. If not specified, the function uses self.tokenizer.

weight_name (str, optional) —
Name of a custom weight file. This should be used when:
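The pairing rule for token and pretrained_model_name_or_path described above (a list of models requires a token list of equal length) can be sketched in plain Python. The helper below is hypothetical, not part of diffusers, and the placeholder tokens such as "<low-poly>" are made up for illustration:

```python
from typing import List, Optional, Union


def check_token_pairing(
    pretrained_model_name_or_path: Union[str, List[str]],
    token: Optional[Union[str, List[str]]] = None,
) -> None:
    """Mirror the documented rule: if a list of model ids or paths is
    passed, `token` must be a list of the same length.
    (Hypothetical helper for illustration, not part of diffusers.)"""
    if isinstance(pretrained_model_name_or_path, list) and token is not None:
        if not isinstance(token, list) or len(token) != len(
            pretrained_model_name_or_path
        ):
            raise ValueError(
                "`token` must be a list of the same length as "
                "`pretrained_model_name_or_path`"
            )


# A single Hub model id with an overriding token is fine:
check_token_pairing("sd-concepts-library/low-poly-hd-logos-icons", "<low-poly>")

# Two sources require two tokens:
check_token_pairing(
    ["sd-concepts-library/low-poly-hd-logos-icons", "./my_text_inversion_directory/"],
    ["<low-poly>", "<my-style>"],
)
```

In the pipeline itself, the equivalent call would be pipe.load_textual_inversion(...) with the same argument shapes, which raises an error on a mismatched list.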