force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) —
Whether to only load local model weights and configuration files or not. If set to True, the model won't be downloaded from the Hub.
token (str or bool, optional) —
The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
subfolder (str, optional, defaults to "") —
The subfolder location of a model file within a larger model repository on the Hub or locally.
mirror (str, optional) —
Mirror source to resolve accessibility issues if you're downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information.
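For illustration, a minimal sketch of passing the download-related arguments documented above to the loader; the specific values shown (the revision pin and cache-only mode) are assumptions for demonstration:

from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pin the embedding to an exact revision and require cached files only,
# using the arguments documented above.
pipe.load_textual_inversion(
    "sd-concepts-library/cat-toy",
    revision="main",          # branch, tag, or commit id
    local_files_only=True,    # fail instead of downloading if not cached
)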
Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and Automatic1111 formats are supported).

Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format:

from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
prompt = "A <cat-toy> backpack"
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first
(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")
prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], option...
prompt to be encoded
device β€” (torch.device):
torch device num_images_per_prompt (int) β€”
number of images that should be generated per prompt do_classifier_free_guidance (bool) β€”
whether to use classifier free guidance or not negative_prompt (str or List[str], optional) β€”
The prompt or prompts not to guide the image generation. If not defined, one has to pass
negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is
less than 1). prompt_embeds (torch.FloatTensor, optional) β€”
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) β€”
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input
argument. lora_scale (float, optional) β€”
A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) β€”
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) β†’ torch.FloatTensor Parameters timesteps (torch.Tensor) β€”
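A minimal sketch of calling encode_prompt directly and reusing the result, assuming the method returns a (prompt_embeds, negative_prompt_embeds) pair as in recent diffusers releases; the prompts shown are placeholders:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Encode once; the embeddings can be reused across multiple calls.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="a photograph of an astronaut riding a horse",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)

# Feed the pre-computed embeddings back through the pipeline.
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    num_inference_steps=50,
).images[0]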
get_guidance_scale_embedding ( w, embedding_dim = 512, dtype = torch.float32 ) → torch.FloatTensor

Parameters

w (torch.Tensor) —
Guidance scale values at which to generate the embedding vectors.
embedding_dim (int, optional, defaults to 512) —
Dimension of the embeddings to generate.
dtype —
Data type of the generated embeddings.

Returns

torch.FloatTensor

Embedding vectors with shape (len(w), embedding_dim).

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
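For intuition, a sketch of how such a guidance-scale embedding can be computed, following the referenced vdm code; this is an illustrative re-implementation, not necessarily the exact diffusers source:

import torch

def guidance_scale_embedding(w: torch.Tensor, embedding_dim: int = 512,
                             dtype: torch.dtype = torch.float32) -> torch.Tensor:
    """Sinusoidal embedding of guidance scale values w, shape (len(w), embedding_dim)."""
    assert w.ndim == 1
    w = w * 1000.0  # scale so typical guidance values span the frequency range
    half_dim = embedding_dim // 2
    # Geometric progression of frequencies, as in standard timestep embeddings.
    freqs = torch.exp(-torch.log(torch.tensor(10000.0)) *
                      torch.arange(half_dim, dtype=dtype) / (half_dim - 1))
    args = w.to(dtype)[:, None] * freqs[None, :]
    emb = torch.cat([torch.sin(args), torch.cos(args)], dim=1)
    if embedding_dim % 2 == 1:
        emb = torch.nn.functional.pad(emb, (0, 1))  # zero-pad odd dimensions
    return emb

# e.g. embedding for a batch with guidance scale 7.5:
emb = guidance_scale_embedding(torch.tensor([7.5]), embedding_dim=512)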
StableDiffusionControlNetImg2ImgPipeline

class diffusers.StableDiffusionControlNetImg2ImgPipeline ( vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet2DConditionModel, ... )

Parameters

vae (AutoencoderKL) —
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (CLIPTextModel) —
Frozen text-encoder (clip-vit-large-patch14).
tokenizer (CLIPTokenizer) —
A CLIPTokenizer to tokenize text.
unet (UNet2DConditionModel) —
A UNet2DConditionModel to denoise the encoded image latents.
controlnet (ControlNetModel or List[ControlNetModel]) —
Provides additional conditioning to the unet during the denoising process. If you set multiple ControlNets as a list, the outputs from each ControlNet are added together to create one combined additional conditioning.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
safety_checker (StableDiffusionSafetyChecker) —
Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for more details about a model's potential harms.
feature_extractor (CLIPImageProcessor) —
A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker.

Pipeline for image-to-image generation using Stable Diffusion with ControlNet guidance.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:

load_textual_inversion() for loading textual inversion embeddings
load_lora_weights() for loading LoRA weights
save_lora_weights() for saving LoRA weights
from_single_file() for loading .ckpt files
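A minimal usage sketch of the pipeline, assuming the canny ControlNet checkpoint "lllyasviel/sd-controlnet-canny" and an input image from the diffusers documentation assets; in practice control_image would be a prepared conditioning image such as a canny edge map:

import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

# Assumed checkpoint ids; any compatible ControlNet / SD 1.5 pair works.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

init_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)
control_image = init_image  # placeholder; usually an edge map or similar

image = pipe(
    "a painting, best quality",
    image=init_image,             # starting point for img2img
    control_image=control_image,  # ControlNet conditioning
    strength=0.8,
    num_inference_steps=50,
).images[0]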
__call__ ( ... )

Parameters

prompt (str or List[str], optional) —
The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]) —
The initial image to be used as the starting point for the image generation process. Can also accept image latents as image, and if passing latents directly they are not encoded again.
control_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]) —
The ControlNet input condition to provide guidance to the unet for generation. If the type is specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be accepted as an image. The dimensions of the output image default to image's dimensions. If height and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in init, images must be passed as a list such that each element of the list can be correctly batched for input to a single ControlNet.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The height in pixels of the generated image.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The width in pixels of the generated image.
strength (float, optional, defaults to 0.8) —
Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a starting point and more noise is added the higher the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1 essentially ignores image.
num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
guidance_scale (float, optional, defaults to 7.5) —
A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional) —
The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead.
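As a worked illustration of the strength parameter described above: in the standard diffusers img2img scheduling, noise is added up to a fraction strength of the schedule, so only roughly int(num_inference_steps * strength) denoising steps actually run. A small sketch of that arithmetic (an assumption based on the usual get_timesteps logic, not quoted from this page):

num_inference_steps = 50
strength = 0.8

# Only the last int(num_inference_steps * strength) steps are run.
init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
t_start = max(num_inference_steps - init_timestep, 0)
effective_steps = num_inference_steps - t_start
print(effective_steps)  # 40 steps for strength=0.8; 50 for strength=1.0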