```python
>>> import cv2
>>> import numpy as np
>>> import torch
>>> from PIL import Image
>>> from diffusers import AutoencoderKL, ControlNetModel, StableDiffusionXLControlNetPipeline
>>> from diffusers.utils import load_image

>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
>>> negative_prompt = "low quality, bad quality, sketches"

>>> # download an image
>>> image = load_image(
...     "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png"
... )

>>> # initialize the models and pipeline
>>> controlnet_conditioning_scale = 0.5  # recommended for good generalization
>>> controlnet = ControlNetModel.from_pretrained(
...     "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
... )
>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
>>> pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()

>>> # get canny image
>>> image = np.array(image)
>>> image = cv2.Canny(image, 100, 200)
>>> image = image[:, :, None]
>>> image = np.concatenate([image, image, image], axis=2)
>>> canny_image = Image.fromarray(image)

>>> # generate image
>>> image = pipe(
...     prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image
... ).images[0]
```

#### disable_freeu

`disable_freeu()`

Disables the FreeU mechanism if enabled.

#### disable_vae_slicing

`disable_vae_slicing()`

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.

#### disable_vae_tiling

`disable_vae_tiling()`

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to computing decoding in one step.

#### enable_freeu

`enable_freeu(s1: float, s2: float, b1: float, b2: float)`

Parameters:
- `s1` (`float`): Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to mitigate the "oversmoothing effect" in the enhanced denoising process.
- `s2` (`float`): Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to mitigate the "oversmoothing effect" in the enhanced denoising process.
- `b1` (`float`): Scaling factor for stage 1 to amplify the contributions of backbone features.
- `b2` (`float`): Scaling factor for stage 2 to amplify the contributions of backbone features.

Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are applied. Please refer to the official repository for combinations of values that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL.

#### enable_vae_slicing

`enable_vae_slicing()`

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

#### enable_vae_tiling

`enable_vae_tiling()`

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and for allowing the processing of larger images.

#### encode_prompt

`encode_prompt(prompt: str, prompt_2: Optional = None, device: Optional = None, num_images_per_prompt: int = 1, do_classifier_free_guidance: bool = True, negative_prompt: Optional = None, negative_prompt_2: Optional = None, prompt_embeds: Optional = None, negative_prompt_embeds: Optional = None, pooled_prompt_embeds: Optional = None, negative_pooled_prompt_embeds: Optional = None, lora_scale: Optional = None, clip_skip: Optional = None)`

Parameters:
- `prompt` (`str` or `List[str]`, *optional*):
prompt to be encoded.
- `prompt_2` (`str` or `List[str]`, *optional*): The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is used in both text-encoders.
- `device` (`torch.device`): torch device.
- `num_images_per_prompt` (`int`): number of images that should be generated per prompt.
- `do_classifier_free_guidance` (`bool`): whether to use classifier-free guidance or not.
- `negative_prompt` (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than 1).
- `negative_prompt_2` (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
- `prompt_embeds` (`torch.FloatTensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the `prompt` input argument.
- `negative_prompt_embeds` (`torch.FloatTensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, `negative_prompt_embeds` will be generated from the `negative_prompt` input argument.
- `pooled_prompt_embeds` (`torch.FloatTensor`, *optional*): Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from the `prompt` input argument.
- `negative_pooled_prompt_embeds` (`torch.FloatTensor`, *optional*): Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled `negative_prompt_embeds` will be generated from the `negative_prompt` input argument.
- `lora_scale` (`float`, *optional*): A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- `clip_skip` (`int`, *optional*): Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.

#### get_guidance_scale_embedding

`get_guidance_scale_embedding(w, embedding_dim=512, dtype=torch.float32)` → `torch.FloatTensor`

Parameters:
- `w` (`torch.Tensor`): generate embedding vectors at these values.
- `embedding_dim` (`int`, *optional*, defaults to 512): dimension of the embeddings to generate.
- `dtype`: data type of the generated embeddings.

Returns `torch.FloatTensor`: embedding vectors with shape `(len(w), embedding_dim)`.
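As a rough illustration of what `get_guidance_scale_embedding` computes, here is a NumPy sketch of the sinusoidal embedding recipe from the VDM reference linked below. The function name and the use of NumPy are mine; the actual method operates on `torch` tensors and casts to the requested `dtype`.

```python
import numpy as np

def guidance_scale_embedding(w, embedding_dim=512):
    """Sinusoidal embedding of guidance values (sketch of the VDM recipe)."""
    w = np.asarray(w, dtype=np.float64) * 1000.0  # scale values as in the reference
    half_dim = embedding_dim // 2
    # geometric frequency ladder from 1 down to 1/10000
    freqs = np.exp(-np.log(10000.0) * np.arange(half_dim) / (half_dim - 1))
    args = w[:, None] * freqs[None, :]
    # first half sines, second half cosines
    emb = np.concatenate([np.sin(args), np.cos(args)], axis=1)
    if embedding_dim % 2 == 1:  # zero-pad when the dimension is odd
        emb = np.pad(emb, ((0, 0), (0, 1)))
    return emb

emb = guidance_scale_embedding([7.5, 3.0], embedding_dim=512)
print(emb.shape)  # (2, 512)
```

The output shape matches the documented return value: one row per input value, `embedding_dim` columns.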
See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298

## StableDiffusionXLControlNetImg2ImgPipeline

`class diffusers.StableDiffusionXLControlNetImg2ImgPipeline(vae: AutoencoderKL, text_encoder: CLIPTextModel, text_encoder_2: CLIPTextModelWithProjection, tokenizer: CLIPTokenizer, tokenizer_2: CLIPTokenizer, unet: UNet2DConditionModel, controlnet: Union, scheduler: KarrasDiffusionSchedulers, requires_aesthetics_score: bool = False, force_zeros_for_empty_prompt: bool = True, add_watermarker: Optional = None)`

Parameters:
- `vae` (AutoencoderKL): Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- `text_encoder` (CLIPTextModel): Frozen text-encoder. Stable Diffusion uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.
- `text_encoder_2` (CLIPTextModelWithProjection): Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of CLIP, specifically the laion/CLIP-ViT-bigG-14-laion2B-39B-b160k variant.
- `tokenizer` (CLIPTokenizer): Tokenizer of class CLIPTokenizer.
- `tokenizer_2` (CLIPTokenizer): Second tokenizer of class CLIPTokenizer.
- `unet` (UNet2DConditionModel): Conditional U-Net architecture to denoise the encoded image latents.
- `controlnet` (ControlNetModel or `List[ControlNetModel]`): Provides additional conditioning to the `unet` during the denoising process. If you set multiple ControlNets as a list, the outputs from each ControlNet are added together to create one combined additional conditioning.
- `scheduler` (SchedulerMixin): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
- `requires_aesthetics_score` (`bool`, *optional*, defaults to `False`): Whether the `unet` requires an `aesthetic_score` condition to be passed during inference. Also see the config of stabilityai/stable-diffusion-xl-refiner-1-0.
- `force_zeros_for_empty_prompt` (`bool`, *optional*, defaults to `True`): Whether the negative prompt embeddings shall always be forced to 0. Also see the config of stabilityai/stable-diffusion-xl-base-1-0.
- `add_watermarker` (`bool`, *optional*): Whether to use the invisible_watermark library to watermark output images. If not defined, it defaults to `True` if the package is installed; otherwise no watermarker is used.

Pipeline for image-to-image generation using Stable Diffusion XL with ControlNet guidance.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all pipelines (such as downloading or saving, or running on a particular device).
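The `controlnet` parameter description above states that when several ControlNets are passed as a list, the outputs from each are added together into one combined conditioning. A minimal sketch of that combination rule, using plain NumPy arrays as stand-ins for the real residual tensors (the function name and shapes are illustrative, not the diffusers internals):

```python
import numpy as np

def combine_controlnet_outputs(per_controlnet_residuals):
    """Elementwise-sum the residuals produced by each ControlNet.

    per_controlnet_residuals: list with one entry per ControlNet; each entry
    is a list of arrays, one per UNet block (stand-ins for the real tensors).
    """
    combined = [np.zeros_like(block) for block in per_controlnet_residuals[0]]
    for residuals in per_controlnet_residuals:
        combined = [c + r for c, r in zip(combined, residuals)]
    return combined

# two ControlNets, each emitting residuals for two (tiny, fake) UNet blocks
a = [np.ones((1, 4, 4)), np.ones((1, 2, 2))]
b = [np.full((1, 4, 4), 2.0), np.full((1, 2, 2), 3.0)]
out = combine_controlnet_outputs([a, b])
print(out[0][0, 0, 0], out[1][0, 0, 0])  # 3.0 4.0
```

In practice the summation is weighted: passing a list of `controlnet_conditioning_scale` values scales each ControlNet's contribution before the residuals are merged.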