>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is
computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to
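Example (a minimal sketch, assuming the pipe and prompt created in the attention slicing example above):

>>> pipe.enable_vae_slicing()
>>> images = pipe([prompt] * 4).images  # the VAE decodes the batch in slices, saving memory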
disable_vae_slicing()
Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to computing decoding in one step.

enable_xformers_memory_efficient_attention(attention_op: Optional[Callable] = None)

Parameters
attention_op (Callable, optional) — Override the default None operator for use as the op argument to the memory_efficient_attention() function of xFormers.

Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed up during training is not guaranteed.

⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes precedence.

Examples:

>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp
>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround: Flash Attention does not accept the VAE's attention shape, so use the default op for the VAE
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)

disable_xformers_memory_efficient_attention()
Disable memory efficient attention from xFormers.

load_textual_inversion(pretrained_model_name_or_path, token=None, tokenizer=None, text_encoder=None, **kwargs)

Parameters
pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) —
Can be either one of the following or a list of them:

- A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a pretrained model hosted on the Hub.
- A path to a directory (for example ./my_text_inversion_directory/) containing the textual inversion weights.
- A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights.
- A torch state dict.
token (str or List[str], optional) — Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a list, then token must also be a list of equal length.
text_encoder (CLIPTextModel, optional) — Frozen text-encoder (clip-vit-large-patch14). If not specified, the function will use self.text_encoder.
tokenizer (CLIPTokenizer, optional) — A CLIPTokenizer to tokenize text. If not specified, the function will use self.tokenizer.
weight_name (str, optional) — Name of a custom weight file. This should be used when:
- The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight name such as text_inv.bin.
- The saved textual inversion file is in the Automatic1111 format.
cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.
mirror (str, optional) — Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information.

Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and Automatic1111 formats are supported).
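Since pretrained_model_name_or_path also accepts a list, several embeddings can be loaded in one call. A sketch (repo ids taken from the examples in this document; the override tokens here are only illustrative, and token must be a list of the same length as the first argument):

pipe.load_textual_inversion(
    ["sd-concepts-library/cat-toy", "sd-concepts-library/low-poly-hd-logos-icons"],
    token=["<cat-toy>", "<low-poly>"],
)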
Example:

To load a Textual Inversion embedding vector in 🤗 Diffusers format:

from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
prompt = "A <cat-toy> backpack"
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first
(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")
prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], option...
prompt (str or List[str], optional) — The prompt to be encoded.
device (torch.device) — The torch device.
num_images_per_prompt (int) — The number of images that should be generated per prompt.
do_classifier_free_guidance (bool) — Whether to use classifier-free guidance or not.
negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
lora_scale (float, optional) — A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.
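Example (a minimal sketch, assuming the pipe from the examples above; in recent Diffusers releases encode_prompt returns a (prompt_embeds, negative_prompt_embeds) tuple, which can be fed back into the pipeline):

prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="a photo of an astronaut riding a horse on mars",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)
image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds).images[0]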
StableDiffusionControlNetInpaintPipeline

class diffusers.StableDiffusionControlNetInpaintPipeline(vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, ...)

Parameters
vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (CLIPTextModel) — Frozen text-encoder (clip-vit-large-patch14).
tokenizer (CLIPTokenizer) —