The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the
list will be passed as callback_kwargs argument. You will only be able to include variables listed in
the ._callback_tensor_inputs attribute of your pipeline class. callback_on_step_end (Callable, optional) β€”
A function that is called at the end of each denoising step during inference. The function is called
with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by
callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) β€”
The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list
will be passed as callback_kwargs argument. You will only be able to include variables listed in the
._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: >>> import torch
>>> from diffusers import WuerstchenCombinedPipeline
>>> pipe = WuerstchenCombinedPipeline.from_pretrained("warp-ai/Wuerstchen", torch_dtype=torch.float16).to(
... "cuda"
... )
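>>> # Illustrative alternative to .to("cuda") above: offload whole models to CPU
>>> # and move each one to the GPU only when its forward method is called
>>> # (see enable_model_cpu_offload documented below):
>>> # pipe.enable_model_cpu_offload()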
>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet"
>>> images = pipe(prompt=prompt) enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward
method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with
enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using πŸ€—
Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a
GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis.
Memory savings are higher than using enable_model_cpu_offload, but performance is lower. WuerstchenPriorPipeline class diffusers.WuerstchenPriorPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel prior: WuerstchenPrior scheduler: DDPMWuerstchenScheduler latent_mean: float = 42.0 latent_std: float = 1.0 resolution_multiple: float = 42.67 ) Parameters prior (Prior) β€”
The canonical unCLIP prior to approximate the image embedding from the text embedding. text_encoder (CLIPTextModel) β€”
Frozen text-encoder. tokenizer (CLIPTokenizer) β€”
Tokenizer of class
CLIPTokenizer. scheduler (DDPMWuerstchenScheduler) β€”
A scheduler to be used in combination with prior to generate image embeddings. latent_mean (float, optional, defaults to 42.0) β€”
Mean value used to normalize the latents. latent_std (float, optional, defaults to 1.0) β€”
Standard deviation used to normalize the latents. resolution_multiple (float, optional, defaults to 42.67) β€”
Multiplier used to determine the prior latent resolution from the requested image size. Pipeline for generating the image prior for Wuerstchen. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: int = 1024 width: int = 1024 num_inference_steps: int = 60 timesteps: List = None guidance_scale: float = 8.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters prompt (str or List[str]) β€”
The prompt or prompts to guide the image generation. height (int, optional, defaults to 1024) β€”
The height in pixels of the generated image. width (int, optional, defaults to 1024) β€”
The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 60) β€”
The number of denoising steps. More denoising steps usually lead to a higher quality image at the
expense of slower inference. timesteps (List[int], optional) β€”
Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps
timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 8.0) β€”
Guidance scale as defined in Classifier-Free Diffusion Guidance.
guidance_scale is defined as w of equation 2 of the Imagen
paper. Guidance scale is enabled by setting
guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely
linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) β€”
The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) β€”
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) β€”
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input
argument. num_images_per_prompt (int, optional, defaults to 1) β€”
The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) β€”
One or a list of torch generator(s)
to make generation deterministic. latents (torch.FloatTensor, optional) β€”
Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pt") β€”
The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np"
(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) β€”
Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) β€”
A function that is called at the end of each denoising step during inference. The function is called
with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by
callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) β€”
The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list
will be passed as callback_kwargs argument. You will only be able to include variables listed in the
._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: >>> import torch
>>> from diffusers import WuerstchenPriorPipeline
>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained(
... "warp-ai/wuerstchen-prior", torch_dtype=torch.float16
... ).to("cuda")
>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet"
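>>> # Illustrative sketch: a step-end callback receives the tensors named in
>>> # callback_on_step_end_tensor_inputs via callback_kwargs and must return
>>> # the (possibly modified) dict; the function name here is hypothetical
>>> def on_step_end(pipeline, step, timestep, callback_kwargs):
...     return callback_kwargs
>>> # pass it with: prior_pipe(prompt, callback_on_step_end=on_step_end)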
>>> prior_output = prior_pipe(prompt) WuerstchenPriorPipelineOutput class diffusers.pipelines.wuerstchen.pipeline_wuerstchen_prior.WuerstchenPriorPipelineOutput < source > ( image_embeddings: Union ) Parameters image_embeddings (torch.FloatTensor or np.ndarray) β€”
Prior image embeddings for the text prompt. Output class for WuerstchenPriorPipeline. WuerstchenDecoderPipeline class diffusers.WuerstchenDecoderPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: WuerstchenDiffNeXt scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel latent_dim_scale: float = 10.67 ) Parameters tokenizer (CLIPTokenizer) β€”
The CLIP tokenizer. text_encoder (CLIPTextModel) β€”
The CLIP text encoder. decoder (WuerstchenDiffNeXt) β€”
The WuerstchenDiffNeXt unet decoder. vqgan (PaellaVQModel) β€”
The VQGAN model. scheduler (DDPMWuerstchenScheduler) β€”
A scheduler to be used in combination with prior to generate image embedding. latent_dim_scale (float, optional, defaults to 10.67) β€”
Multiplier to determine the VQ latent space size from the image embeddings. If the image embeddings are
height=24 and width=24, the VQ latent shape needs to be height=int(24*10.67)=256 and
width=int(24*10.67)=256 in order to match the training conditions. Pipeline for generating images from the Wuerstchen model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeddings: Union prompt: Union = None num_inference_steps: int = 12 timesteps: Optional = None guidance_scale: float = 0.0 negative_prompt: Union = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters image_embedding (torch.FloatTensor or List[torch.FloatTensor]) β€”
Image embeddings either extracted from an image or generated by a prior model. prompt (str or List[str]) β€”
The prompt or prompts to guide the image generation. num_inference_steps (int, optional, defaults to 12) β€”
The number of denoising steps. More denoising steps usually lead to a higher quality image at the
expense of slower inference. timesteps (List[int], optional) β€”
Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps
timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 0.0) β€”
Guidance scale as defined in Classifier-Free Diffusion Guidance.
guidance_scale is defined as w of equation 2 of the Imagen
paper. Guidance scale is enabled by setting
guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely
linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) β€”
The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) β€”
The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) β€”
One or a list of torch generator(s)
to make generation deterministic. latents (torch.FloatTensor, optional) β€”
Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
generation. Can be used to tweak the same generation with different prompts. If not provided, a latents