Whether to use the `invisible_watermark` library to watermark output images. If not defined, it defaults to `True` if the package is installed; otherwise no watermarker is used.

Pipeline for text-to-image generation using Stable Diffusion XL with ControlNet guidance.

This model inherits from `DiffusionPipeline`. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:

- `load_textual_inversion()` for loading textual inversion embeddings
- `load_lora_weights()` for loading LoRA weights
- `save_lora_weights()` for saving LoRA weights
- `from_single_file()` for loading `.ckpt` files
- `load_ip_adapter()` for loading IP Adapters

**__call__**

( prompt: Union = None, prompt_2: Union = None, image: Union = None, height: Optional = None, width: Optional = None, num_inference_steps: int = 50, guidance_scale: float = 5.0, negative_prompt: Union = None, negative_prompt_2: Union = None, num_images_per_prompt: Optional = 1, eta: float = 0.0, generator: Union = None, latents: Optional = None, prompt_embeds: Optional = None, negative_prompt_embeds: Optional = None, pooled_prompt_embeds: Optional = None, negative_pooled_prompt_embeds: Optional = None, ip_adapter_image: Union = None, output_type: Optional = 'pil', return_dict: bool = True, cross_attention_kwargs: Optional = None, controlnet_conditioning_scale: Union = 1.0, guess_mode: bool = False, control_guidance_start: Union = 0.0, control_guidance_end: Union = 1.0, original_size: Tuple = None, crops_coords_top_left: Tuple = (0, 0), target_size: Tuple = None, negative_original_size: Optional = None, negative_crops_coords_top_left: Tuple = (0, 0), negative_target_size: Optional = None, clip_skip: Optional = None, callback_on_step_end: Optional = None, callback_on_step_end_tensor_inputs: List = ['latents'], **kwargs ) → `StableDiffusionPipelineOutput` or `tuple`

Parameters

- **prompt** (`str` or `List[str]`, *optional*) — The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **prompt_2** (`str` or `List[str]`, *optional*) — The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is used in both text-encoders.
- **image** (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.FloatTensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) — The ControlNet input condition to provide guidance to the `unet` for generation. If the type is specified as `torch.FloatTensor`, it is passed to the ControlNet as is. A `PIL.Image.Image` can also be accepted as an image. The dimensions of the output image default to `image`'s dimensions. If `height` and/or `width` are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`, images must be passed as a list such that each element of the list can be correctly batched for input to a single ControlNet.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) — The height in pixels of the generated image. Anything below 512 pixels won't work well for stabilityai/stable-diffusion-xl-base-1.0 and checkpoints that are not specifically fine-tuned on low resolutions.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) — The width in pixels of the generated image. Anything below 512 pixels won't work well for stabilityai/stable-diffusion-xl-base-1.0 and checkpoints that are not specifically fine-tuned on low resolutions.
- **num_inference_steps** (`int`, *optional*, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 5.0) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) — The prompt or prompts to guide what to not include in image generation. This is sent to `tokenizer_2` and `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) — The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the `DDIMScheduler`, and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) — A `torch.Generator` to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, pooled text embeddings are generated from the `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) — Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, pooled `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
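When supplying `latents` yourself, the tensor must match the shape the pipeline would otherwise create internally. The helper below is a rough sketch of that shape arithmetic; the 4 latent channels and the VAE downscale factor of 8 are SDXL defaults assumed here, not values stated in the parameter list above:

```python
# Hypothetical helper: compute the shape expected for a user-supplied
# `latents` tensor. Assumes SDXL defaults: 4 latent channels and a VAE
# spatial downscale factor of 8.
def latents_shape(batch_size, num_images_per_prompt, height, width,
                  num_channels_latents=4, vae_scale_factor=8):
    # The pipeline batches prompts x images-per-prompt, and works in the
    # VAE's downscaled latent space rather than pixel space.
    return (
        batch_size * num_images_per_prompt,
        num_channels_latents,
        height // vae_scale_factor,
        width // vae_scale_factor,
    )

# One prompt, two images per prompt, at the default 1024x1024 resolution.
print(latents_shape(1, 2, 1024, 1024))  # (2, 4, 128, 128)
```

Passing pre-generated latents of this shape (sampled from a Gaussian) lets you reuse the same starting noise across different prompts.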
- **ip_adapter_image** (`PipelineImageInput`, *optional*) — Optional image input to work with IP Adapters.
- **output_type** (`str`, *optional*, defaults to `"pil"`) — The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) — Whether or not to return a `StableDiffusionPipelineOutput` instead of a plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) — A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined in `self.processor`.
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) — The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set the corresponding scale as a list.
- **guess_mode** (`bool`, *optional*, defaults to `False`) — The ControlNet encoder tries to recognize the content of the input image even if you remove all prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) — The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) — The percentage of total steps at which the ControlNet stops applying.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) — If `original_size` is not the same as `target_size`, the image will appear to be down- or upsampled. `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) — `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) — For most cases, `target_size` should be set to the desired height and width of the generated image. If not specified, it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) — To negatively condition the generation process based on a specific image resolution. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) — To negatively condition the generation process based on specific crop coordinates. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) — To negatively condition the generation process based on a target image resolution. It should be the same as the `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **clip_skip** (`int`, *optional*) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, *optional*) — A function that is called at the end of each denoising step during inference. The function is called with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) — The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list will be passed as the `callback_kwargs` argument. You will only be able to include variables listed in the `._callback_tensor_inputs` attribute of your pipeline class.

Returns

`StableDiffusionPipelineOutput` or `tuple`

If `return_dict` is `True`, `StableDiffusionPipelineOutput` is returned, otherwise a `tuple` is returned containing the output images.
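The interaction of `control_guidance_start` and `control_guidance_end` can be pictured as a fraction-of-steps window in which the ControlNet is active. The function below is a simplified reimplementation of that gating logic for illustration, not the library's exact code:

```python
# Simplified sketch of the step gating implied by control_guidance_start /
# control_guidance_end: the ControlNet residuals are only applied while the
# current step falls inside the [start, end] fraction of the denoising run.
def controlnet_keep(step, num_inference_steps, start=0.0, end=1.0):
    frac_before = step / num_inference_steps          # fraction completed before this step
    frac_after = (step + 1) / num_inference_steps     # fraction completed after this step
    return not (frac_before < start or frac_after > end)

# With 10 steps and a 0.2-0.8 window, only the middle steps use the ControlNet.
active = [controlnet_keep(i, 10, start=0.2, end=0.8) for i in range(10)]
print(active)
```

With the defaults (`start=0.0`, `end=1.0`) every step is gated in, which matches the documented behavior of the ControlNet applying for the whole run.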
The call function to the pipeline for generation.

Examples:

```py
>>> # !pip install opencv-python transformers accelerate
>>> from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL
>>> from diffusers.utils import load_image
>>> import numpy as np
>>> import torch
>>> import cv2
```
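The example above stops after the imports. A continuation in the spirit of a typical SDXL ControlNet canny workflow might look like the following; the model IDs, the input image URL, and the canny thresholds are assumptions for illustration, not part of the original text:

```py
>>> from PIL import Image

>>> # Assumed checkpoints -- substitute your own as needed.
>>> controlnet = ControlNetModel.from_pretrained(
...     "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
... )
>>> pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()

>>> # Build a canny-edge conditioning image from an input photo (URL assumed).
>>> image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png")
>>> image = np.array(image)
>>> image = cv2.Canny(image, 100, 200)
>>> image = np.concatenate([image[:, :, None]] * 3, axis=2)  # 1-channel edges -> 3-channel image
>>> canny_image = Image.fromarray(image)

>>> image = pipe("aerial view, a futuristic research complex", image=canny_image).images[0]
```

The edge map is tiled to three channels because the ControlNet conditioning image is expected in RGB form.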