won’t be downloaded from the Hub.
token (str or bool, optional) —
The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
subfolder (str, optional, defaults to "") —
The subfolder location of a model file within a larger model repository on the Hub or locally.
mirror (str, optional) —
Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information.

Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and Automatic1111 formats are supported).

Example:

To load a Textual Inversion embedding vector in 🤗 Diffusers format:

from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
prompt = "A <cat-toy> backpack"
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png")

To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first (for example from civitAI) and then load the vector locally:

from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")
prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png")

load_lora_weights
< source >
( pretrained_model_name_or_path_or_dict: Union, adapter_name = None, **kwargs )

Parameters

pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) —
See lora_state_dict().
kwargs (dict, optional) —
See lora_state_dict().
adapter_name (str, optional) —
Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.

Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and self.text_encoder.

All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded into self.text_encoder.

save_lora_weights
< source >
( save_directory: Union, unet_lora_layers: Dict = None, text_encoder_lora_layers: Dict = None, transformer_lora_layers: Dict = None, is_main_process: bool = True, weight_name: str = None, save_function: Callable = None, safe_serialization: bool = True )

Parameters

save_directory (str or os.PathLike) —
Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) —
State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) —
State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text
encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) —
Whether the process calling this is the main process or not. Useful during distributed training and you
need to call this function on all processes. In this case, set is_main_process=True only on the main
process to avoid race conditions. save_function (Callable) —
The function to use to save the state dictionary. Useful during distributed training when you need to
replace torch.save with another method. Can be configured with the environment variable
DIFFUSERS_SAVE_MODE.
safe_serialization (bool, optional, defaults to True) —
Whether to save the model using safetensors or the traditional PyTorch way with pickle.

Save the LoRA parameters corresponding to the UNet and text encoder.

encode_prompt
< source >
( prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt = None, prompt_embeds: Optional = None, negative_prom... )

Parameters

prompt —
The prompt to be encoded.
device (torch.device) —
The torch device.
num_images_per_prompt (int) —
The number of images that should be generated per prompt.
do_classifier_free_guidance (bool) —
Whether to use classifier-free guidance or not.
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass
negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is
less than 1). prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input
argument. lora_scale (float, optional) —
A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.

StableDiffusionPipelineOutput

class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput
< source >
( images: Union, nsfw_content_detected: Optional )

Parameters

images (List[PIL.Image.Image] or np.ndarray) —
List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).
nsfw_content_detected (List[bool]) —
List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or None if safety checking could not be performed.

Output class for Stable Diffusion pipelines.
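The two fields line up index-by-index, so filtering flagged images is a simple zip. Below is a minimal sketch; FakePipelineOutput is a hypothetical stand-in for the object a real `output = pipe(prompt)` call would return, not part of diffusers:

```python
# Hypothetical stand-in for StableDiffusionPipelineOutput: in a real run,
# `output = pipe(prompt)` returns an object with `.images` and
# `.nsfw_content_detected` as documented above.
class FakePipelineOutput:
    def __init__(self, images, nsfw_content_detected):
        self.images = images
        self.nsfw_content_detected = nsfw_content_detected

def keep_safe_images(output):
    """Drop images flagged as NSFW; keep everything if the check was skipped (None)."""
    if output.nsfw_content_detected is None:
        return list(output.images)
    return [
        image
        for image, flagged in zip(output.images, output.nsfw_content_detected)
        if not flagged
    ]

output = FakePipelineOutput(
    images=["image_0", "image_1", "image_2"],  # placeholders for PIL images
    nsfw_content_detected=[False, True, False],
)
print(keep_safe_images(output))  # ['image_0', 'image_2']
```

Note that nsfw_content_detected can be None (safety checking disabled or failed), which is why the helper treats that case separately from an all-False list.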
Overview

🧨 Diffusers offers many pipelines, models, and schedulers for generative tasks. To make loading these components as simple as possible, we provide a single and unified method - from_pretrained() - that loads any of these components from either the Hugging Face Hub or your local machine. Whenever you load a pi...
Pipelines
Pipelines provide a simple way to run state-of-the-art diffusion models in inference.
Most diffusion systems consist of multiple independently-trained models and highly adaptable scheduler
components - all of which are needed to have a functioning end-to-end diffusion system.
As an example, Stable Diffusion has three independently trained models:
Autoencoder
Conditional UNet
CLIP text encoder
as well as the following components:
a scheduler
a CLIPFeatureExtractor
a safety checker
All of these components are necessary to run stable diffusion in inference even though they were trained
or created independently from each other.
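How these independently trained pieces fit together can be sketched with toy stand-ins. The functions and shapes below are illustrative only, not the real models; they exist to show the data flow a pipeline orchestrates (prompt → text embedding → iterative denoising → decoding → safety check):

```python
import numpy as np

# Toy stand-ins for the components listed above (illustrative shapes only).
def text_encoder(prompt):
    # CLIP text encoder: turns a prompt into an embedding.
    return np.ones((1, 77, 8))

def unet(latents, t, text_emb):
    # Conditional UNet: predicts the noise present in `latents` at step t.
    return 0.1 * latents

def scheduler_step(latents, noise_pred):
    # Scheduler: removes the predicted noise to get the next latents.
    return latents - noise_pred

def vae_decode(latents):
    # Autoencoder: decodes latents into image space.
    return np.tanh(latents)

def safety_checker(image):
    # Safety checker: flags NSFW content (always False in this toy version).
    return image, False

latents = np.random.randn(1, 4, 8, 8)          # start from random noise
emb = text_encoder("a photo of a cat")
for t in range(3):                              # a (very) short denoising loop
    noise_pred = unet(latents, t, emb)
    latents = scheduler_step(latents, noise_pred)
image, nsfw = safety_checker(vae_decode(latents))
print(image.shape, nsfw)                        # (1, 4, 8, 8) False
```

Every stage consumes the previous stage's output, which is why a functioning end-to-end system needs all of the components even though each was trained or created on its own.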
To that end, we strive to offer all open-sourced, state-of-the-art diffusion systems under a unified API.
More specifically, we strive to provide pipelines that
can load the officially published weights and yield 1-to-1 the same outputs as the original implementation according to the corresponding paper (e.g. LDMTextToImagePipeline uses the officially released weights of High-Resolution Image Synthesis with Latent Diffusion Models),
have a simple user interface to run the model in inference (see the Pipelines API section),
are easy to understand with code that is self-explanatory and can be read along-side the official paper (see Pipelines summary),
can easily be contributed by the community (see the Contribution section).
Note that pipelines do not (and should not) offer any training functionality.
If you are looking for official training examples, please have a look at examples.
🧨 Diffusers Summary
The following table summarizes all officially supported pipelines, their corresponding paper, and, if available, a Colab notebook to directly try them out.
Pipeline
Paper