is_main_process (bool, optional, defaults to True) —
Whether the process calling this is the main process. Useful during distributed training when you
need to call this function on all processes. In this case, set is_main_process=True only on the main
process to avoid race conditions.
save_function (Callable) —
The function used to save the state dictionary. Useful during distributed training when you need to
replace torch.save with another method. Can be configured with the environment variable
DIFFUSERS_SAVE_MODE.
safe_serialization (bool, optional, defaults to True) —
Whether to save the model using safetensors or the traditional PyTorch way with pickle.

Save the LoRA parameters corresponding to the UNet and text encoder.

disable_freeu < source > ( )

Disables the FreeU mechanism if enabled.

enable_freeu < source > ( s1: float s2: float b1: float b2: float )

Parameters

s1 (float) —
Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to
mitigate the “oversmoothing effect” in the enhanced denoising process.
s2 (float) —
Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to
mitigate the “oversmoothing effect” in the enhanced denoising process.
b1 (float) —
Scaling factor for stage 1 to amplify the contributions of the backbone features.
b2 (float) —
Scaling factor for stage 2 to amplify the contributions of the backbone features.

Enables the FreeU mechanism as described in https://arxiv.org/abs/2309.11497. The suffixes after the
scaling factors indicate the stages where they are applied. Please refer to the official repository
for combinations of values that are known to work well for different pipelines such as Stable
Diffusion v1, v2, and Stable Diffusion XL.

encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None )

Parameters

prompt (str or List[str], optional) —
The prompt to be encoded.
device (torch.device) —
The torch device.
num_images_per_prompt (int) —
The number of images that should be generated per prompt.
do_classifier_free_guidance (bool) —
Whether to use classifier-free guidance or not.
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass
negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is
less than 1).
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input
argument.
lora_scale (float, optional) —
A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.

fuse_qkv_projections < source > ( unet: bool = True vae: bool = True )

Parameters

unet (bool, defaults to True) —
Whether to apply fusion on the UNet.
vae (bool, defaults to True) —
Whether to apply fusion on the VAE.

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query,
key, value) are fused. For cross-attention modules, key and value projection matrices are fused.

This API is 🧪 experimental.

get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor

Parameters

w (torch.Tensor) —
Guidance scale values at which to generate embedding vectors.
embedding_dim (int, optional, defaults to 512) —
Dimension of the embeddings to generate.
dtype (torch.dtype, optional, defaults to torch.float32) —
Data type of the generated embeddings.

Returns

torch.FloatTensor

Embedding vectors with shape (len(w), embedding_dim).
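For illustration, here is a minimal NumPy sketch of a comparable sinusoidal embedding. The half-sine/half-cosine layout and the factor of 1000 applied to w are assumptions based on common implementations of this scheme (including the vdm code referenced below); the actual method operates on torch tensors.

```python
import numpy as np


def guidance_scale_embedding(w, embedding_dim=512):
    """Sinusoidal embedding of guidance scale values w.

    Sketch only: assumes w is pre-scaled by 1000 and the output is
    split half sin / half cos, zero-padded when embedding_dim is odd.
    """
    w = np.asarray(w, dtype=np.float64) * 1000.0  # scaling factor is an assumption
    half_dim = embedding_dim // 2
    # Geometrically spaced frequencies, as in standard timestep embeddings.
    freqs = np.exp(np.arange(half_dim) * -(np.log(10000.0) / (half_dim - 1)))
    args = w[:, None] * freqs[None, :]                    # (len(w), half_dim)
    emb = np.concatenate([np.sin(args), np.cos(args)], axis=1)
    if embedding_dim % 2 == 1:                            # zero-pad odd dims
        emb = np.pad(emb, [(0, 0), (0, 1)])
    return emb                                            # (len(w), embedding_dim)
```

The returned array has shape (len(w), embedding_dim), matching the documented return value.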
See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298

unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True )

Parameters

unet (bool, defaults to True) —
Whether to unfuse the QKV projections on the UNet.
vae (bool, defaults to True) —
Whether to unfuse the QKV projections on the VAE.

Disables QKV projection fusion if enabled.

This API is 🧪 experimental.

StableDiffusionPipelineOutput

class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional )

Parameters

images (List[PIL.Image.Image] or np.ndarray) —
List of denoised PIL images of length batch_size or a NumPy array of shape (batch_size, height, width, num_channels).
nsfw_content_detected (List[bool]) —
List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content, or
None if safety checking could not be performed.

Output class for Stable Diffusion pipelines.

FlaxStableDiffusionImg2ImgPipeline

class diffusers.FlaxStableDiffusionImg2ImgPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = <class 'jax.numpy.float32'> )

Parameters

vae (FlaxAutoencoderKL) —
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (FlaxCLIPTextModel) —
Frozen text-encoder (clip-vit-large-patch14).
tokenizer (CLIPTokenizer) —
A CLIPTokenizer to tokenize text.
unet (FlaxUNet2DConditionModel) —
A FlaxUNet2DConditionModel to denoise the encoded image latents.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of
FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or FlaxDPMSolverMultistepScheduler.
safety_checker (FlaxStableDiffusionSafetyChecker) —
Classification module that estimates whether generated images could be considered offensive or harmful.
Please refer to the model card for more details about a model’s potential harms.
feature_extractor (CLIPImageProcessor) —
A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker.

Flax-based pipeline for text-guided image-to-image generation using Stable Diffusion.

This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

__call__ < source > ( prompt_ids: Array image: Array params: Union prng_seed: Array strength: float = 0.8 num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 noise: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple

Parameters

prompt_ids (jnp.ndarray) —
The prompt or prompts to guide image generation.
image (jnp.ndarray) —
Array representing an image batch to be used as the starting point.
params (Dict or FrozenDict) —
Dictionary containing the model parameters/weights.
prng_seed (jax.Array) —
Array containing the random number generator key.
strength (float, optional, defaults to 0.8) —
Indicates the extent to transform the reference image. Must be between 0 and 1. image is used as a
starting point, and more noise is added the higher the strength. The number of denoising steps depends
on the amount of noise initially added. When strength is 1, the added noise is maximum and the
denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1
essentially ignores image.
num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher quality image at the
expense of slower inference. This parameter is modulated by strength.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The height in pixels of the generated image.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The width in pixels of the generated image.
guidance_scale (float, optional, defaults to 7.5) —
A higher guidance scale value encourages the model to generate images closely linked to the text
prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
noise (jnp.ndarray, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution to be used as inputs for image
generation. Can be used to tweak the same generation with different prompts. The array is generated by
sampling using the supplied random generator.
return_dict (bool, optional, defaults to True) —
Whether or not to return a FlaxStableDiffusionPipelineOutput instead of a plain tuple.
jit (bool, defaults to False) —
Whether to run pmap versions of the generation and safety scoring functions.
This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a
future release.
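The interplay between strength and num_inference_steps described above can be sketched as follows. This is a simplified illustration of the usual diffusers img2img convention, not the exact Flax implementation:

```python
def img2img_schedule(num_inference_steps: int, strength: float):
    """Return (start_index, actual_steps) for an img2img run.

    Sketch only: assumes the common convention where `strength`
    selects how far into the noise schedule denoising begins.
    """
    # Fraction of the full schedule that will actually run.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    # Denoising starts this far into the full schedule.
    t_start = max(num_inference_steps - init_timestep, 0)
    return t_start, num_inference_steps - t_start
```

With the defaults above (strength=0.8, num_inference_steps=50), 40 of the 50 scheduled steps actually run; strength=1.0 runs all 50 steps from pure noise, which is why a value of 1 essentially ignores image.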
Returns

FlaxStableDiffusionPipelineOutput or tuple

If return_dict is True, FlaxStableDiffusionPipelineOutput is returned; otherwise, a tuple is
returned where the first element is a list with the generated images and the second element is a
list of bools indicating whether the corresponding generated image contains “not-safe-for-work”
(nsfw) content.
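Since guidance_scale follows the standard classifier-free guidance formulation, its effect can be shown with a small sketch (apply_cfg is a hypothetical helper, not part of the pipeline API; the pipeline applies this combination to the UNet's noise predictions):

```python
def apply_cfg(noise_uncond: float, noise_text: float, guidance_scale: float) -> float:
    """Classifier-free guidance: push the prediction away from the
    unconditional branch and toward the text-conditioned branch."""
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)
```

At guidance_scale = 1.0 this reduces to the text-conditioned prediction alone, which is why guidance is considered enabled only when guidance_scale > 1.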
The call function to the pipeline for generation.

Examples:

>>> import jax
>>> import numpy as np
>>> import jax.numpy as jnp
>>> from flax.jax_utils import replicate
>>> from flax.training.common_utils import shard
>>> import requests
>>> from io import BytesIO
>>> from PIL import Image
>>> from diffusers import FlaxStableDiffusionImg2ImgPipeline

>>> def create_key(seed=0):
...     return jax.random.PRNGKey(seed)

>>> rng = create_key(0)

>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
>>> init_img = Image.open(BytesIO(requests.get(url).content)).convert("RGB").resize((768, 512))
>>> prompts = "A fantasy landscape, trending on artstation"

>>> pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained(
...     "CompVis/stable-diffusion-v1-4", revision="flax", dtype=jnp.bfloat16
... )

>>> num_samples = jax.device_count()
>>> rng = jax.random.split(rng, num_samples)
>>> prompt_ids, processed_image = pipeline.prepare_inputs(
...     prompt=[prompts] * num_samples, image=[init_img] * num_samples
... )
>>> p_params = replicate(params)
>>> prompt_ids, processed_image = shard(prompt_ids), shard(processed_image)

>>> output = pipeline(
...     prompt_ids=prompt_ids, image=processed_image, params=p_params, prng_seed=rng,
...     strength=0.75, num_inference_steps=50, jit=True, height=512, width=768
... ).images
>>> output_images = pipeline.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:])))