negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL.Image or np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
clip_skip (int, optional) —
Number of layers to skip from CLIP while computing the prompt embeddings. A value of 1 means the output of the pre-final layer is used for computing the prompt embeddings.
callback_on_step_end (Callable, optional) —
A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs.
callback_on_step_end_tensor_inputs (List, optional) —
The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You can only include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.

Returns

StableDiffusionPipelineOutput or tuple

If return_dict is True, StableDiffusionPipelineOutput is returned; otherwise a tuple is returned where the first element is a list with the generated images.
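As a minimal sketch of the callback signature described above, a callback_on_step_end function could simply record progress. The names logged and log_step are illustrative, not part of the diffusers API; only the signature is fixed:

```python
# Minimal sketch of a callback_on_step_end function. The names `logged`
# and `log_step` are hypothetical; only the signature is fixed by the API.
logged = []

def log_step(pipe, step, timestep, callback_kwargs):
    # `callback_kwargs` holds only the tensors requested via
    # `callback_on_step_end_tensor_inputs`; the dict must be returned so
    # the pipeline can pick up any tensors the callback modified.
    logged.append((step, int(timestep)))
    return callback_kwargs
```

Such a function would be passed to the pipeline call as callback_on_step_end=log_step, together with, for example, callback_on_step_end_tensor_inputs=["latents"].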
The call function to the pipeline for generation.

Examples:

>>> import torch
>>> import requests
>>> from PIL import Image

>>> from diffusers import StableDiffusionDepth2ImgPipeline

>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-depth",
...     torch_dtype=torch.float16,
... )
>>> pipe.to("cuda")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> init_image = Image.open(requests.get(url, stream=True).raw)
>>> prompt = "two tigers"
>>> n_prompt = "bad, deformed, ugly, bad anatomy"
>>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]

enable_attention_slicing
< source >
( slice_size: Union = 'auto' )

Parameters

slice_size (str or int, optional, defaults to "auto") —
When "auto", halves the input to the attention heads, so attention is computed in two steps. If "max", the maximum amount of memory is saved by running only one slice at a time. If a number is provided, as many slices as attention_head_dim // slice_size are used. In this case, attention_head_dim must be a multiple of slice_size.

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in several steps. For more than one attention head, the computation is performed sequentially over each head. This is useful for saving some memory in exchange for a small speed decrease.

⚠️ Don't enable attention slicing if you're already using scaled_dot_product_attention (SDPA) from PyTorch 2.0 or xFormers. These attention computations are already very memory efficient, so you won't need to enable this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!

Examples:

>>> import torch
>>> from diffusers import StableDiffusionPipeline |
>>> pipe = StableDiffusionPipeline.from_pretrained( |
... "runwayml/stable-diffusion-v1-5", |
... torch_dtype=torch.float16, |
... use_safetensors=True, |
... ) |
>>> prompt = "a photo of an astronaut riding a horse on mars" |
>>> pipe.enable_attention_slicing() |
>>> image = pipe(prompt).images[0]

disable_attention_slicing
< source >
( )

Disable sliced attention computation. If enable_attention_slicing was previously called, attention is computed in one step.

enable_xformers_memory_efficient_attention
< source >
( attention_op: Optional = None )

Parameters

attention_op (Callable, optional) —
Override the default None operator for use as the op argument to the memory_efficient_attention() function of xFormers.

Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed up during training is not guaranteed.

⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes precedence.

Examples:

>>> import torch
>>> from diffusers import DiffusionPipeline |
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp |
>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) |
>>> pipe = pipe.to("cuda") |
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) |
>>> # Workaround for not accepting attention shape using VAE for Flash Attention |
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)

disable_xformers_memory_efficient_attention
< source >
( )

Disable memory efficient attention from xFormers.

load_textual_inversion
< source >
( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_enco... )

Parameters

pretrained_model_name_or_path (str or os.PathLike or List of them, optional) —
Can be either one of the following or a list of them:
- A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a pretrained model hosted on the Hub.
- A path to a directory (for example ./my_text_inversion_directory/) containing the textual inversion weights.
- A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights.
- A torch state dict.
token (str or List[str], optional) —
Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a list, then token must also be a list of equal length.
text_encoder (CLIPTextModel, optional) —
Frozen text-encoder (clip-vit-large-patch14). If not specified, the function uses self.text_encoder.
tokenizer (CLIPTokenizer, optional) —
A CLIPTokenizer to tokenize text. If not specified, the function uses self.tokenizer.
weight_name (str, optional) —
Name of a custom weight file. This should be used when:
- The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight name such as text_inv.bin.
- The saved textual inversion file is in the Automatic1111 format.
cache_dir (Union[str, os.PathLike], optional) —
Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) —
Whether to only load local model weights and configuration files or not. If set to True, the model won't be downloaded from the Hub.
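A usage sketch for load_textual_inversion, in the same REPL style as the examples above. The Hub concept sd-concepts-library/cat-toy and its <cat-toy> placeholder token are assumptions for illustration; running this downloads model weights and assumes a CUDA device:

```
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
... )
>>> pipe.to("cuda")
>>> pipe.load_textual_inversion("sd-concepts-library/cat-toy")
>>> # The loaded concept is referenced in the prompt via its placeholder token.
>>> image = pipe("A <cat-toy> backpack").images[0]
```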