pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
num_images_per_prompt (int, optional, defaults to 1) —
The number of images to generate per prompt.
eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) —
A torch.Generator to make generation deterministic.
latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling with the supplied random generator.
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
ip_adapter_image (PipelineImageInput, optional) —
Optional image input to work with IP Adapters.
ip_adapter_image_embeds (List[torch.FloatTensor], optional) —
Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of IP-adapters, and each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding if do_classifier_free_guidance is set to True. If not provided, embeddings are computed from the ip_adapter_image input argument.
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL.Image or np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) —
The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added to the residual in the original UNet. If multiple ControlNets are specified in init, you can set the corresponding scale as a list.
guess_mode (bool, optional, defaults to False) —
The ControlNet encoder tries to recognize the content of the input image even if you remove all prompts. A guidance_scale value between 3.0 and 5.0 is recommended.
control_guidance_start (float or List[float], optional, defaults to 0.0) —
The percentage of total steps at which the ControlNet starts applying.
control_guidance_end (float or List[float], optional, defaults to 1.0) —
The percentage of total steps at which the ControlNet stops applying.
clip_skip (int, optional) —
Number of layers to skip from CLIP while computing the prompt embeddings. A value of 1 means the output of the pre-final layer is used for computing the prompt embeddings.
callback_on_step_end (Callable, optional) —
A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs.
callback_on_step_end_tensor_inputs (List, optional) —
The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You can only include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.

Returns

StableDiffusionPipelineOutput or tuple

If return_dict is True, StableDiffusionPipelineOutput is returned; otherwise a tuple is returned where the first element is a list with the generated images and the second element is a list of bools indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content.
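To make the control_guidance_start / control_guidance_end parameters above concrete, here is a minimal pure-Python sketch (a hypothetical helper, not the actual diffusers internals) of how such fractional start/end values could map onto a per-step on/off schedule for the ControlNet residuals:

```python
def controlnet_keep(num_inference_steps, start=0.0, end=1.0):
    """Return 1.0 for steps where the ControlNet applies, else 0.0."""
    keeps = []
    for i in range(num_inference_steps):
        # A step is kept when its fractional position in the denoising
        # schedule lies inside the [start, end] window.
        frac_lo = i / num_inference_steps
        frac_hi = (i + 1) / num_inference_steps
        keeps.append(0.0 if (frac_lo < start or frac_hi > end) else 1.0)
    return keeps

# With 10 steps and a window of [0.0, 0.5], only the first half of the
# denoising steps receives ControlNet conditioning.
print(controlnet_keep(10, start=0.0, end=0.5))
# → [1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

With the defaults (start=0.0, end=1.0), every step is conditioned, matching the documented default behavior.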
The call function to the pipeline for generation.

Examples:

>>> # !pip install opencv-python transformers accelerate
>>> from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler
>>> from diffusers.utils import load_image
>>> import numpy as np
>>> import torch

>>> import cv2
>>> from PIL import Image

>>> # download an image
>>> image = load_image(
...     "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
... )
>>> np_image = np.array(image)

>>> # get canny image
>>> np_image = cv2.Canny(np_image, 100, 200)
>>> np_image = np_image[:, :, None]
>>> np_image = np.concatenate([np_image, np_image, np_image], axis=2)
>>> canny_image = Image.fromarray(np_image)

>>> # load control net and stable diffusion v1-5
>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
>>> pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
... )

>>> # speed up diffusion process with faster scheduler and memory optimization
>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
>>> pipe.enable_model_cpu_offload()

>>> # generate image
>>> generator = torch.manual_seed(0)
>>> image = pipe(
...     "futuristic-looking woman",
...     num_inference_steps=20,
...     generator=generator,
...     image=image,
...     control_image=canny_image,
... ).images[0]

enable_attention_slicing

( slice_size: Union = 'auto' )

Parameters

slice_size (str or int, optional, defaults to "auto") —
When "auto", halves the input to the attention heads, so attention will be computed in two steps. If "max", the maximum amount of memory is saved by running only one slice at a time. If a number is provided, as many slices as attention_head_dim // slice_size are used; in this case, attention_head_dim must be a multiple of slice_size.

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor into slices to compute attention in several steps. For more than one attention head, the computation is performed sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

⚠️ Don't enable attention slicing if you're already using scaled_dot_product_attention (SDPA) from PyTorch 2.0 or xFormers. These attention computations are already very memory efficient, so you won't need to enable this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!

Examples:

>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )
>>> pipe.enable_attention_slicing()
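As a rough illustration of the slice_size semantics described above, here is a minimal pure-Python sketch (a hypothetical helper, not the actual diffusers internals) of how many slices the attention computation would be split into for each setting:

```python
def num_attention_slices(attention_head_dim, slice_size="auto"):
    """Illustrative slice-count arithmetic for the slice_size parameter."""
    if slice_size == "auto":
        # "auto" halves the input to the attention heads: two steps.
        return 2
    if slice_size == "max":
        # "max" runs only one slice at a time for maximum memory savings.
        return attention_head_dim
    if attention_head_dim % slice_size != 0:
        raise ValueError("attention_head_dim must be a multiple of slice_size")
    return attention_head_dim // slice_size

print(num_attention_slices(8))                 # "auto" → 2
print(num_attention_slices(8, slice_size=4))   # 8 // 4 → 2
```

Smaller slices trade more sequential steps (slower) for a lower peak memory footprint, which is the trade-off the docstring above describes.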