second element is a list of bools indicating whether the corresponding generated image contains
“not-safe-for-work” (nsfw) content.
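The pairing of generated images with their nsfw flags can be illustrated with a small sketch. The Output dataclass below is a stand-in for the pipeline's output object, not the real StableDiffusionPipelineOutput class:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Output:
    # Stand-in for StableDiffusionPipelineOutput: a list of images and a
    # parallel list of bools flagging possible nsfw content.
    images: List[str]
    nsfw_content_detected: Optional[List[bool]]


def keep_safe(out):
    # When the safety checker is disabled the flag list is None; keep everything.
    if out.nsfw_content_detected is None:
        return out.images
    # Otherwise drop every image whose corresponding flag is True.
    return [img for img, flagged in zip(out.images, out.nsfw_content_detected) if not flagged]


out = Output(images=["img0", "img1", "img2"], nsfw_content_detected=[False, True, False])
print(keep_safe(out))  # ['img0', 'img2']
```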
The call function to the pipeline for generation.

Examples:

>>> # !pip install opencv-python transformers accelerate
>>> from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
>>> from diffusers.utils import load_image
>>> import numpy as np
>>> import torch

>>> import cv2
>>> from PIL import Image

>>> # download an image
>>> image = load_image(
...     "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
... )
>>> image = np.array(image)

>>> # get canny image
>>> image = cv2.Canny(image, 100, 200)
>>> image = image[:, :, None]
>>> image = np.concatenate([image, image, image], axis=2)
>>> canny_image = Image.fromarray(image)

>>> # load control net and stable diffusion v1-5
>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
>>> pipe = StableDiffusionControlNetPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
... )

>>> # speed up diffusion process with faster scheduler and memory optimization
>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
>>> # remove following line if xformers is not installed
>>> pipe.enable_xformers_memory_efficient_attention()

>>> pipe.enable_model_cpu_offload()

>>> # generate image
>>> generator = torch.manual_seed(0)
>>> image = pipe(
...     "futuristic-looking woman", num_inference_steps=20, generator=generator, image=canny_image
... ).images[0]

enable_attention_slicing
< source >
( slice_size: Union = 'auto' )

Parameters

slice_size (str or int, optional, defaults to "auto") —
When "auto", halves the input to the attention heads, so attention will be computed in two steps. If
"max", the maximum amount of memory will be saved by running only one slice at a time. If a number is
provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim
must be a multiple of slice_size.

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor
in slices to compute attention in several steps. For more than one attention head, the computation is performed
sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch
2.0 or xFormers. These attention computations are already very memory efficient, so you won’t need to enable
this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!

Examples:

>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]

disable_attention_slicing
< source >
( )

Disable sliced attention computation. If enable_attention_slicing was previously called, attention is
computed in one step.

enable_vae_slicing
< source >
( )

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

disable_vae_slicing
< source >
( )

Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to
computing decoding in one step.

enable_xformers_memory_efficient_attention
< source >
( attention_op: Optional = None )

Parameters

attention_op (Callable, optional) —
Override the default None operator for use as the op argument to the
memory_efficient_attention()
function of xFormers.

Enable memory efficient attention from xFormers. When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
precedence.

Examples:

>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)

disable_xformers_memory_efficient_attention
< source >
( )

Disable memory efficient attention from xFormers.

load_textual_inversion
< source >
( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_enco... )

Parameters

pretrained_model_name_or_path —
Can be either one of the following or a list of them:

A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a
pretrained model hosted on the Hub.
A path to a directory (for example ./my_text_inversion_directory/) containing the textual
inversion weights.
A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights.
A torch state
dict.

token (str or List[str], optional) —
Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a
list, then token must also be a list of equal length.

text_encoder (CLIPTextModel, optional) —
Frozen text-encoder (clip-vit-large-patch14).
If not specified, the function will take self.text_encoder.

tokenizer (CLIPTokenizer, optional) —
A CLIPTokenizer to tokenize text. If not specified, the function will take self.tokenizer.

weight_name (str, optional) —
Name of a custom weight file. This should be used when:

The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight
name such as text_inv.bin.
The saved textual inversion file is in the Automatic1111 format.

cache_dir (Union[str, os.PathLike], optional) —
Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
is not used.

force_download (bool, optional, defaults to False) —
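The list-input constraint on token described above can be sketched as a small standalone check. This is a hypothetical helper that mirrors the documented rule, not part of the diffusers API:

```python
def check_token_argument(pretrained_model_name_or_path, token=None):
    # Mirrors the documented rule: if `pretrained_model_name_or_path`
    # is a list, `token` must also be a list of equal length.
    if isinstance(pretrained_model_name_or_path, list) and token is not None:
        if not isinstance(token, list) or len(token) != len(pretrained_model_name_or_path):
            raise ValueError(
                "`token` must be a list of the same length as "
                "`pretrained_model_name_or_path`"
            )


# One source with one override token: fine.
check_token_argument("sd-concepts-library/low-poly-hd-logos-icons", token="<logo>")

# Two sources require two tokens; a mismatch raises ValueError.
check_token_argument(["./my_text_inversions.pt", "./other.pt"], token=["<a>", "<b>"])
```

In actual use, pipe.load_textual_inversion("sd-concepts-library/low-poly-hd-logos-icons") loads the embedding and registers its token with the pipeline's tokenizer, per the parameter descriptions above.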