The pipeline inherits common methods the library implements for all pipelines (such as downloading or saving, or running on a particular device). The pipeline also inherits the following loading methods:

load_textual_inversion() for loading textual inversion embeddings
load_lora_weights() for loading LoRA weights
save_lora_weights() for saving LoRA weights

__call__

( prompt: Union = None prompt_2: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.8 num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.8 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple

Parameters

prompt (str or List[str], optional) — The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.

prompt_2 (str or List[str], optional) — The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt is used in both text encoders.

image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]) — The initial image to be used as the starting point for the image generation process. Can also accept image latents as image; if latents are passed directly, they will not be encoded again.

control_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray],
List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]) — The ControlNet input condition. ControlNet uses this input condition to generate guidance for the UNet. If the type is specified as torch.FloatTensor, it is passed to the ControlNet as is. A PIL.Image.Image can also be accepted as an image. The dimensions of the output image default to image's dimensions. If height and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in init, images must be passed as a list such that each element of the list can be correctly batched for input to a single ControlNet.

height (int, optional, defaults to the size of control_image) —
The height in pixels of the generated image. Anything below 512 pixels won't work well for stabilityai/stable-diffusion-xl-base-1.0 and checkpoints that are not specifically fine-tuned on low resolutions.

width (int, optional, defaults to the size of control_image) — The width in pixels of the generated image. Anything below 512 pixels won't work well for stabilityai/stable-diffusion-xl-base-1.0 and checkpoints that are not specifically fine-tuned on low resolutions.

num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher-quality image at the expense of slower inference.

strength (float, optional, defaults to 0.8) —
Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image is used as a starting point, and more noise is added to it the higher the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, the added noise is maximal and the denoising process runs for the full number of iterations specified in num_inference_steps.

guidance_scale (float, optional, defaults to 5.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages generating images that are closely linked to the text prompt, usually at the expense of lower image quality.

negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).

negative_prompt_2 (str or List[str], optional) — The prompt or prompts not to guide the image generation, to be sent to tokenizer_2 and text_encoder_2. If not defined, negative_prompt is used in both text encoders.

num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.

eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler; ignored for other schedulers.

generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic.

latents (torch.FloatTensor, optional) —
Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.

prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.

negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.

pooled_prompt_embeds (torch.FloatTensor, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from the prompt input argument.

negative_pooled_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from the negative_prompt input argument.

output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.array.

return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.

cross_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.

controlnet_conditioning_scale (float or List[float], optional, defaults to 0.8) — The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added to the residual in the original UNet. If multiple ControlNets are specified in init, you can set the corresponding scale as a list.

guess_mode (bool, optional, defaults to False) —
In this mode, the ControlNet encoder tries to recognize the content of the input image even if you remove all prompts. A guidance_scale between 3.0 and 5.0 is recommended.

control_guidance_start (float or List[float], optional, defaults to 0.0) — The percentage of total steps at which the ControlNet starts applying.

control_guidance_end (float or List[float], optional, defaults to 1.0) — The percentage of total steps at which the ControlNet stops applying.

original_size (Tuple[int], optional, defaults to (1024, 1024)) — If original_size is not the same as target_size, the image will appear to be down- or upsampled. original_size defaults to (height, width) if not specified. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.

crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) —
crops_coords_top_left can be used to generate an image that appears to be "cropped" from the position crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting crops_coords_top_left to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.

target_size (Tuple[int], optional, defaults to (1024, 1024)) — For most cases, target_size should be set to the desired height and width of the generated image. If not specified, it will default to (height, width). Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.

negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — To negatively condition the generation process based on a specific image resolution. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.

negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) —
To negatively condition the generation process based on specific crop coordinates. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.

negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — To negatively condition the generation process based on a target image resolution. It should be the same as target_size for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.

aesthetic_score (float, optional, defaults to 6.0) — Used to simulate an aesthetic score of the generated image by influencing the positive text condition. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.

negative_aesthetic_score (float, optional, defaults to 2.5) — Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. Can be used to simulate an aesthetic score of the generated image by influencing the negative text condition.

clip_skip (int, optional) — Number of layers to skip from CLIP while computing the prompt embeddings. A value of 1 means the output of the pre-final layer is used for computing the prompt embeddings.

callback_on_step_end (Callable, optional) —
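To make the interaction between strength and num_inference_steps concrete, here is a minimal sketch (the helper name is hypothetical) of the schedule truncation that img2img-style pipelines typically apply: the full schedule is prepared, then the first (1 - strength) fraction of timesteps is skipped.

```python
def effective_denoising_steps(num_inference_steps: int, strength: float) -> int:
    # Hypothetical helper: set up the full schedule, then skip the first
    # (1 - strength) fraction of timesteps before denoising begins.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return num_inference_steps - t_start

# With the defaults above (num_inference_steps=50, strength=0.8),
# only 40 denoising steps actually run.
```

This is why a low strength both preserves more of the input image and finishes faster: fewer steps are executed.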
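The guidance_scale formula (w in equation 2 of the Imagen paper) can be illustrated with a scalar sketch; real pipelines apply the same expression element-wise to the unconditional and text-conditional noise-prediction tensors.

```python
def classifier_free_guidance(noise_uncond: float, noise_text: float,
                             guidance_scale: float) -> float:
    # Push the prediction from the unconditional estimate toward the
    # text-conditional one, scaled by w (scalar sketch of the tensor op).
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)
```

At guidance_scale = 1 this reduces to the conditional prediction alone, which is why guidance is considered "enabled" only above 1.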
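In guess mode, ControlNet residuals are commonly damped with logarithmically spaced factors (deeper blocks damped less) rather than one uniform scale; this is a sketch under that assumption, with a hypothetical helper name.

```python
def guess_mode_scales(num_residuals: int) -> list:
    # Hypothetical sketch: factors spaced logarithmically from 0.1 up to
    # 1.0 across the ControlNet residuals, so early (shallow) residuals
    # are damped the most.
    return [10 ** (-1 + i / (num_residuals - 1)) for i in range(num_residuals)]
```

Combined with dropping the prompt, this lets the ControlNet condition dominate while still leaving the model some freedom, which is why a moderate guidance_scale (3.0 to 5.0) is recommended in this mode.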
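The control_guidance_start / control_guidance_end window can be sketched as a per-step predicate (hypothetical helper, mirroring the fraction-of-total-steps logic described above): ControlNet residuals are applied only while the current step lies inside the window.

```python
def controlnet_keep(step: int, total_steps: int,
                    start: float = 0.0, end: float = 1.0) -> bool:
    # Keep the ControlNet residuals only while the current step falls
    # inside the [start, end] window, measured as a fraction of total steps.
    return not (step / total_steps < start or (step + 1) / total_steps > end)
```

For example, start=0.5 skips ControlNet conditioning for the first half of denoising, letting the text prompt set the global composition before the control image takes over.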
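As a sketch of how SDXL's micro-conditioning tuples (original_size, crops_coords_top_left, target_size) are consumed: the pipeline flattens them into a single conditioning vector that gets embedded alongside the timestep. The helper below is hypothetical; the real implementation builds a tensor and, for refiner-style checkpoints, may append the aesthetic scores instead of target_size.

```python
def make_add_time_ids(original_size, crops_coords_top_left, target_size):
    # Flatten the three micro-conditioning tuples into one six-element
    # vector: (orig_h, orig_w, crop_top, crop_left, target_h, target_w).
    return list(original_size) + list(crops_coords_top_left) + list(target_size)
```

Because the values are conditioning inputs rather than hard constraints, setting original_size smaller than target_size merely makes the output look upsampled; it does not change the actual output resolution.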
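A minimal sketch of one common clip_skip indexing convention over the text encoder's hidden states (hypothetical helper; SDXL reads the penultimate hidden state by default, and exact indexing may differ between pipelines):

```python
def select_hidden_state(hidden_states: list, clip_skip=None):
    # Default: penultimate layer output. With clip_skip set, move
    # clip_skip layers further back from that default.
    if clip_skip is None:
        return hidden_states[-2]
    return hidden_states[-(clip_skip + 2)]
```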