https://huggingface.co/papers/2307.01952.
crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) —
crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
target_size (Tuple[int], optional, defaults to (1024, 1024)) —
For most cases, target_size should be set to the desired height and width of the generated image. If not specified, it defaults to (width, height). Part of SDXL’s micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
aesthetic_score (float, optional, defaults to 6.0) —
Used to simulate an aesthetic score of the generated image by influencing the positive text condition. Part of SDXL’s micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
negative_aesthetic_score (float, optional, defaults to 2.5) —
Used to simulate an aesthetic score of the generated image by influencing the negative text condition. Part of SDXL’s micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
clip_skip (int, optional) —
Number of layers to skip in CLIP when computing the prompt embeddings. A value of 1 means the output of the pre-final layer is used for computing the prompt embeddings.
callback_on_step_end (Callable, optional) —
A function called at the end of each denoising step during inference, with the signature callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include all tensors specified by callback_on_step_end_tensor_inputs.
callback_on_step_end_tensor_inputs (List, optional) —
The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You can only include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.

Returns

~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple

~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.
Function invoked when calling the pipeline for generation.

Examples:

>>> # !pip install transformers accelerate
>>> from diffusers import StableDiffusionXLControlNetInpaintPipeline, ControlNetModel, DDIMScheduler
>>> from diffusers.utils import load_image
>>> from PIL import Image
>>> import cv2
>>> import numpy as np
>>> import torch
>>> init_image = load_image( |
... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png" |
... ) |
>>> init_image = init_image.resize((1024, 1024)) |
>>> generator = torch.Generator(device="cpu").manual_seed(1) |
>>> mask_image = load_image( |
... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png" |
... ) |
>>> mask_image = mask_image.resize((1024, 1024)) |
>>> def make_canny_condition(image): |
... image = np.array(image) |
... image = cv2.Canny(image, 100, 200) |
... image = image[:, :, None] |
... image = np.concatenate([image, image, image], axis=2) |
... image = Image.fromarray(image) |
... return image |
>>> control_image = make_canny_condition(init_image) |
>>> controlnet = ControlNetModel.from_pretrained( |
... "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16 |
... ) |
>>> pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained( |
... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16 |
... ) |
>>> pipe.enable_model_cpu_offload() |
>>> # generate image |
>>> image = pipe( |
... "a handsome man with ray-ban sunglasses", |
... num_inference_steps=20, |
... generator=generator, |
... eta=1.0, |
... image=init_image, |
... mask_image=mask_image, |
... control_image=control_image, |
...     ).images[0]

disable_freeu

( )

Disables the FreeU mechanism if enabled.

disable_vae_slicing

( )

Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to computing decoding in one step.

disable_vae_tiling

( )

Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to computing decoding in one step.

enable_freeu

( s1: float s2: float b1: float b2: float )

Parameters

s1 (float) —
Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to mitigate the “oversmoothing effect” in the enhanced denoising process.
s2 (float) —
Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to mitigate the “oversmoothing effect” in the enhanced denoising process.
b1 (float) —
Scaling factor for stage 1 to amplify the contributions of backbone features.
b2 (float) —
Scaling factor for stage 2 to amplify the contributions of backbone features.

Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are applied. Please refer to the official repository for combinations of the values that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL.

enable_vae_slicing

( )

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor into slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

enable_vae_tiling

( )

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and allows processing larger images.

encode_prompt

( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None )

Parameters

prompt (str or List[str], optional) —
The prompt to be encoded.
prompt_2 (str or List[str], optional) —
The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt is used in both text-encoders.
device (torch.device) —
The torch device.
num_images_per_prompt (int) —
The number of images that should be generated per prompt.
do_classifier_free_guidance (bool) —
Whether to use classifier-free guidance or not.
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
negative_prompt_2 (str or List[str], optional) —
The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and text_encoder_2. If not defined, negative_prompt is used in both text-encoders.
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt