Can be used to overwrite loadable and saveable variables (for example, the pipeline components of the
specific pipeline class). The overwritten components are directly passed to the pipeline's __init__
method. See the example below for more information.
Instantiate an AutoencoderKL from pretrained VAE weights saved in the original .ckpt or
.safetensors format. The model is set in evaluation mode (model.eval()) by default. Make sure to pass both image_size and scaling_factor to from_single_file() if you’re loading
a VAE from an SDXL or a Stable Diffusion v2 (or higher) checkpoint.
Examples:
from diffusers import AutoencoderKL
url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be local file
model = AutoencoderKL.from_single_file(url)
FromOriginalControlNetMixin
class diffusers.loaders.FromOriginalControlNetMixin ( )
Load pretrained ControlNet weights saved in the .ckpt or .safetensors format into a ControlNetModel.
from_single_file ( pretrained_model_link_or_path, **kwargs )
Parameters
pretrained_model_link_or_path (str or os.PathLike, optional) —
Can be either:
A link to the .ckpt file (for example
"https://huggingface.co/<repo_id>/blob/main/<path_to_file>.ckpt") on the Hub.
A path to a file containing all pipeline weights.
torch_dtype (str or torch.dtype, optional) —
Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the
dtype is automatically derived from the model’s weights.
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist.
cache_dir (Union[str, os.PathLike], optional) —
Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
is not used.
resume_download (bool, optional, defaults to False) —
Whether or not to resume downloading the model weights and configuration files. If set to False, any
incompletely downloaded files are deleted.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, for example,
{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) —
Whether to only load local model weights and configuration files or not. If set to True, the model
won’t be downloaded from the Hub.
token (str or bool, optional) —
The token to use as HTTP bearer authorization for remote files. If True, the token generated from
diffusers-cli login (stored in ~/.huggingface) is used.
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
allowed by Git.
use_safetensors (bool, optional, defaults to None) —
If set to None, the safetensors weights are downloaded if they’re available and if the
safetensors library is installed. If set to True, the model is forcibly loaded from safetensors
weights. If set to False, safetensors weights are not loaded.
image_size (int, optional, defaults to 512) —
The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable
Diffusion v2 base model. Use 768 for Stable Diffusion v2.
upcast_attention (bool, optional, defaults to None) —
Whether the attention computation should always be upcasted.
kwargs (remaining dictionary of keyword arguments, optional) —
Can be used to overwrite loadable and saveable variables (for example, the pipeline components of the
specific pipeline class). The overwritten components are directly passed to the pipeline's __init__
method. See the example below for more information.
Instantiate a ControlNetModel from pretrained ControlNet weights saved in the original .ckpt or
.safetensors format. The model is set in evaluation mode (model.eval()) by default.
Examples:
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path
controlnet = ControlNetModel.from_single_file(url)
url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path
pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet)
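As noted in the AutoencoderKL section above, loading a VAE from an SDXL or Stable Diffusion v2+ checkpoint requires passing image_size and scaling_factor explicitly. The sketch below illustrates this; the checkpoint URL and the SDXL-specific values are assumptions for illustration, so verify them against your model card before use:

```python
from diffusers import AutoencoderKL

# Hypothetical single-file SDXL VAE checkpoint; substitute your own URL or local path.
url = "https://huggingface.co/stabilityai/sdxl-vae/blob/main/sdxl_vae.safetensors"
vae = AutoencoderKL.from_single_file(
    url,
    image_size=1024,         # assumed: SDXL models are trained at 1024x1024
    scaling_factor=0.13025,  # assumed: SDXL VAE scaling factor; check the model card
)
```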
Quicktour
Get up and running with 🧨 Diffusers quickly!
Whether you’re a developer or an everyday user, this quick tour will help you get started and show you how to use DiffusionPipeline for inference.
Before you begin, make sure you have all the necessary libraries installed:
pip install --upgrade diffusers accelerate transformers
accelerate speeds up model loading for inference and training
transformers is required to run the most popular diffusion models, such as Stable Diffusion
DiffusionPipeline
The DiffusionPipeline is the easiest way to use a pre-trained diffusion system for inference. You can use the DiffusionPipeline out-of-the-box for many tasks across different modalities. Take a look at the table below for some supported tasks:
Task
Description
Pipeline
Unconditional Image Generation
generate an image from Gaussian noise
unconditional_image_generation
Text-Guided Image Generation
generate an image given a text prompt
conditional_image_generation
Text-Guided Image-to-Image Translation
adapt an image guided by a text prompt
img2img
Text-Guided Image-Inpainting
fill the masked part of an image given the image, the mask and a text prompt
inpaint
Text-Guided Depth-to-Image Translation
adapt parts of an image guided by a text prompt while preserving structure via depth estimation
depth2image
For more detailed information about how diffusion pipelines work for the different tasks, take a look at the Using Diffusers section.
As an example, start by creating an instance of DiffusionPipeline and specify which pipeline checkpoint you would like to download.
You can use the DiffusionPipeline for any Diffusers checkpoint.
In this guide though, you’ll use DiffusionPipeline for text-to-image generation with Stable Diffusion.
Before running a Stable Diffusion model, carefully read its license: the model's improved image generation capabilities mean it could also be used to produce potentially harmful content.
Head over to your Stable Diffusion model of choice, for example runwayml/stable-diffusion-v1-5, and read the license.
You can load the model as follows:
>>> from diffusers import DiffusionPipeline
>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components.
Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on a GPU.
You can move the generator object to a GPU, just as you would in PyTorch:
>>> pipeline.to("cuda")
Now you can use the pipeline on your text prompt:
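The section breaks off here. Based on the standard DiffusionPipeline call pattern shown above, generating and saving an image typically looks like the following sketch; the prompt text and output filename are placeholders, not part of the original document:

```python
>>> image = pipeline("An image of a squirrel in Picasso style").images[0]
>>> image.save("image_of_squirrel_painting.png")
```

Calling the pipeline returns an output object whose images attribute holds the generated PIL images.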