prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.

negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.

pooled_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from the prompt input argument.

negative_pooled_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from the negative_prompt input argument.

lora_scale (float, optional) —
A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.

clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.

StableDiffusionXLControlNetInpaintPipeline

class diffusers.StableDiffusionXLControlNetInpaintPipeline
< source >
( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: ControlNetModel scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None )

Parameters

vae (AutoencoderKL) —
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.

text_encoder (CLIPTextModel) —
Frozen text-encoder. Stable Diffusion XL uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.

text_encoder_2 (CLIPTextModelWithProjection) —
Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of CLIP, specifically the laion/CLIP-ViT-bigG-14-laion2B-39B-b160k variant.

tokenizer (CLIPTokenizer) —
Tokenizer of class CLIPTokenizer.

tokenizer_2 (CLIPTokenizer) —
Second tokenizer of class CLIPTokenizer.

unet (UNet2DConditionModel) —
Conditional U-Net architecture to denoise the encoded image latents.

scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.

Pipeline for image inpainting using Stable Diffusion XL with ControlNet guidance.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
load_lora_weights() for loading LoRA weights
save_lora_weights() for saving LoRA weights
from_single_file() for loading .ckpt files

__call__
< source >
( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.9999 num_inference_steps: int = 50 denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple

Parameters

prompt (str or List[str], optional) —
The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.

prompt_2 (str or List[str], optional) —
The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt is used in both text-encoders.

image (PIL.Image.Image) —
Image, or tensor representing an image batch, which will be inpainted, i.e. parts of the image will be masked out with mask_image and repainted according to prompt.

mask_image (PIL.Image.Image) —
Image, or tensor representing an image batch, to mask image with. White pixels in the mask will be repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, so the expected shape would be (B, H, W, 1).

height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The height in pixels of the generated image.

width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The width in pixels of the generated image.

strength (float, optional, defaults to 0.9999) —
Conceptually, indicates how much to transform the masked portion of the reference image. Must be between 0 and 1. image will be used as a starting point, adding more noise to it the larger the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise will be maximum and the denoising process will run for the full number of iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores the masked portion of the reference image. Note that when denoising_start is specified, the value of strength will be ignored.

num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.

denoising_start (float, optional) —
When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and it is assumed that the passed image is a partly denoised image. Note that when this is specified, strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refining the Image Output.

denoising_end (float, optional) —
When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be completed before it is intentionally terminated early. As a result, the returned sample will still retain a substantial amount of noise. For example, with denoising_end=0.8 the final 20% of timesteps remain and should be denoised by a successor pipeline that has denoising_start set to 0.8. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image Output.

guidance_scale (float, optional, defaults to 5.0) —
Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.

negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).

negative_prompt_2 (str or List[str], optional) —
The prompt or prompts not to guide the image generation, to be sent to tokenizer_2 and text_encoder_2. If not defined, negative_prompt is used in both text-encoders.

prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.

negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.

pooled_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from the prompt input argument.

negative_pooled_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from the negative_prompt input argument.

num_images_per_prompt (int, optional, defaults to 1) —
The number of images to generate per prompt.

eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler; will be ignored for others.

generator (torch.Generator, optional) —
One or a list of torch generator(s) to make generation deterministic.

latents (torch.FloatTensor, optional) —
Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.

output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.array.

return_dict (bool, optional, defaults to True) —
Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.

cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.

original_size (Tuple[int], optional, defaults to (1024, 1024)) —
If original_size is not the same as target_size, the image will appear to be down- or upsampled. original_size defaults to (width, height) if not specified. Part of SDXL’s micro-conditioning as explained in section 2.2 of
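The single-channel layout described for mask_image can be sketched in a few lines of NumPy. The helper below is hypothetical (it is not part of diffusers, which performs this preprocessing internally); it only illustrates the convention: an RGB mask collapsed to luminance, binarized so white means "repaint" and black means "preserve", and reshaped to (B, H, W, 1).

```python
import numpy as np

def mask_to_single_channel(rgb_mask: np.ndarray) -> np.ndarray:
    """Collapse an (H, W, 3) RGB mask to the (B, H, W, 1) single-channel layout:
    white pixels (repaint) -> 1.0, black pixels (keep) -> 0.0.
    Hypothetical helper for illustration only."""
    # ITU-R 601 luminance weights, as used by PIL's "L" conversion
    lum = rgb_mask[..., 0] * 0.299 + rgb_mask[..., 1] * 0.587 + rgb_mask[..., 2] * 0.114
    binary = (lum / 255.0 >= 0.5).astype(np.float32)  # binarize: repaint vs. preserve
    return binary[None, ..., None]                    # (H, W) -> (1, H, W, 1)

# A 64x64 mask whose right half is white (to be repainted)
mask = np.zeros((64, 64, 3), dtype=np.float32)
mask[:, 32:, :] = 255.0
t = mask_to_single_channel(mask)
print(t.shape)  # (1, 64, 64, 1)
```

Passing a mask already in this shape avoids any ambiguity about which pixels the pipeline will repaint.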
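The interaction between strength and num_inference_steps described above follows the usual img2img bookkeeping: strength decides how many of the scheduled steps are actually run. The sketch below is illustrative arithmetic under that assumption, not the diffusers source.

```python
def effective_steps(num_inference_steps: int, strength: float) -> int:
    """Illustrative sketch: map strength to the number of denoising steps
    actually executed. Higher strength -> more noise added -> more steps run."""
    # Skip the earliest (least noisy) part of the schedule when strength < 1
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return num_inference_steps - t_start

print(effective_steps(50, 1.0))     # 50: full schedule, reference image ignored
print(effective_steps(50, 0.5))     # 25: start halfway through the noise schedule
print(effective_steps(50, 0.9999))  # 49: the default keeps (almost) the full schedule
```

This is why the default of 0.9999 behaves almost like strength=1 while still technically starting from the noised reference image.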
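The denoising_start/denoising_end handoff in a "Mixture of Denoisers" setup reduces to simple arithmetic over the timestep schedule. split_timesteps below is a hypothetical helper (not a diffusers API) showing how a base pipeline with denoising_end=0.8 and a successor with denoising_start=0.8 divide a 50-step schedule.

```python
def split_timesteps(num_steps: int, boundary: float) -> tuple:
    """Illustrative sketch: divide a schedule of num_steps between a base
    pipeline (denoising_end=boundary) and a successor pipeline
    (denoising_start=boundary)."""
    base = int(round(num_steps * boundary))  # steps run by the base pipeline
    return base, num_steps - base            # remainder handled by the successor

print(split_timesteps(50, 0.8))  # (40, 10): base does 80%, successor the final 20%
```

The key invariant is that the two fractions meet at the same boundary, so the successor picks up exactly where the base pipeline left off.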