A function that is called at the end of each denoising step during inference. The function is called
with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by
callback_on_step_end_tensor_inputs.
callback_on_step_end_tensor_inputs (List, optional) —
The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list
will be passed as the callback_kwargs argument. You will only be able to include variables listed in the
._callback_tensor_inputs attribute of your pipeline class.
Returns
StableDiffusionPipelineOutput or tuple |
StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple |
containing the output images. |
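As a minimal sketch, a callback with the signature described above might look like the following. The function name and the print-based inspection are illustrative, not part of the API; the sketch assumes "latents" was requested via callback_on_step_end_tensor_inputs:

```python
# Illustrative callback_on_step_end function (the name is hypothetical).
# It receives the pipeline, the current step index, the current timestep,
# and a dict containing the tensors requested via
# callback_on_step_end_tensor_inputs, and must return a dict of any
# tensors it changed (here, nothing is changed).
def log_latents(pipe, step, timestep, callback_kwargs):
    latents = callback_kwargs["latents"]  # only available if "latents" was requested
    print(f"step {step}, timestep {timestep}: latents shape {tuple(latents.shape)}")
    return callback_kwargs
```

It would then be passed to the pipeline call as callback_on_step_end=log_latents together with callback_on_step_end_tensor_inputs=["latents"].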
Function invoked when calling the pipeline for generation.
Examples:
>>> # pip install accelerate transformers safetensors diffusers
>>> import torch |
>>> import numpy as np |
>>> from PIL import Image |
>>> from transformers import DPTFeatureExtractor, DPTForDepthEstimation |
>>> from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline, AutoencoderKL |
>>> from diffusers.utils import load_image |
>>> depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to("cuda") |
>>> feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-hybrid-midas") |
>>> controlnet = ControlNetModel.from_pretrained( |
... "diffusers/controlnet-depth-sdxl-1.0-small", |
... variant="fp16", |
... use_safetensors=True, |
... torch_dtype=torch.float16, |
... ).to("cuda") |
>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16).to("cuda") |
>>> pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained( |
... "stabilityai/stable-diffusion-xl-base-1.0", |
... controlnet=controlnet, |
... vae=vae, |
... variant="fp16", |
... use_safetensors=True, |
... torch_dtype=torch.float16, |
... ).to("cuda") |
>>> pipe.enable_model_cpu_offload() |
>>> def get_depth_map(image): |
... image = feature_extractor(images=image, return_tensors="pt").pixel_values.to("cuda") |
... with torch.no_grad(), torch.autocast("cuda"): |
... depth_map = depth_estimator(image).predicted_depth |
... depth_map = torch.nn.functional.interpolate( |
... depth_map.unsqueeze(1), |
... size=(1024, 1024), |
... mode="bicubic", |
... align_corners=False, |
... ) |
... depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True) |
... depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True) |
... depth_map = (depth_map - depth_min) / (depth_max - depth_min) |
... image = torch.cat([depth_map] * 3, dim=1) |
... image = image.permute(0, 2, 3, 1).cpu().numpy()[0] |
... image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8)) |
... return image |
>>> prompt = "A robot, 4k photo" |
>>> image = load_image( |
... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" |
... "/kandinsky/cat.png" |
... ).resize((1024, 1024)) |
>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization |
>>> depth_image = get_depth_map(image) |
>>> images = pipe( |
... prompt, |
... image=image, |
... control_image=depth_image, |
... strength=0.99, |
... num_inference_steps=50, |
... controlnet_conditioning_scale=controlnet_conditioning_scale, |
... ).images |
>>> images[0].save("robot_cat.png")

disable_freeu()
Disables the FreeU mechanism if enabled.

disable_vae_slicing()
Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to computing decoding in one step.

disable_vae_tiling()
Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to computing decoding in one step.

enable_freeu(s1: float, s2: float, b1: float, b2: float)
Parameters
s1 (float) —
Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to |
mitigate the “oversmoothing effect” in the enhanced denoising process.
s2 (float) —
Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to mitigate the “oversmoothing effect” in the enhanced denoising process.
b1 (float) —
Scaling factor for stage 1 to amplify the contributions of backbone features.
b2 (float) —
Scaling factor for stage 2 to amplify the contributions of backbone features.
Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are applied. Please refer to the official repository for combinations of the values that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL.

enable_vae_slicing()
Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor into slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

enable_vae_tiling()
Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and for processing larger images.

encode_prompt(prompt: str, prompt_2: Optional = None, device: Optional = None, num_images_per_prompt: int = 1, do_classifier_free_guidance: bool = True, negative_prompt: Optional = None, negative_prompt_2: Optional = None, prompt_embeds: Optional = None, negative_prompt_embeds: Optional = None, pooled_prompt_embeds: Optional = None, negative_pooled_prompt_embeds: Optional = None, lora_scale: Optional = None, clip_skip: Optional = None)
Parameters
prompt (str or List[str], optional) —
The prompt to be encoded.
prompt_2 (str or List[str], optional) —
The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt is used in both text-encoders.
device (torch.device, optional) —
The torch device.
num_images_per_prompt (int) —
The number of images that should be generated per prompt.
do_classifier_free_guidance (bool) —
Whether to use classifier-free guidance or not.
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
negative_prompt_2 (str or List[str], optional) —
The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and text_encoder_2. If not defined, negative_prompt is used in both text-encoders.
prompt_embeds (torch.FloatTensor, optional) —