url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be local file
model = AutoencoderKL.from_single_file(url) FromOriginalControlnetMixin class diffusers.loaders.FromOriginalControlnetMixin < source > ( ) Load pretrained ControlNet weights saved in the .ckpt or .safetensors format into a ControlNetModel. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) β€”
Can be either:
A link to the .ckpt file (for example
"https://huggingface.co/<repo_id>/blob/main/<path_to_file>.ckpt") on the Hub.
A path to a file containing all pipeline weights.
torch_dtype (str or torch.dtype, optional) —
Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the dtype is automatically derived from the model's weights.
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
cache_dir (Union[str, os.PathLike], optional) —
Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
resume_download (bool, optional, defaults to False) —
Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) —
Whether to only load local model weights and configuration files or not. If set to True, the model won't be downloaded from the Hub.
token (str or bool, optional) —
The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
use_safetensors (bool, optional, defaults to None) —
If set to None, the safetensors weights are downloaded if they're available and if the safetensors library is installed. If set to True, the model is forcibly loaded from safetensors weights. If set to False, safetensors weights are not loaded.
image_size (int, optional, defaults to 512) —
The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable Diffusion v2 base model. Use 768 for Stable Diffusion v2.
upcast_attention (bool, optional, defaults to None) —
Whether the attention computation should always be upcasted.
kwargs (remaining dictionary of keyword arguments, optional) —
Can be used to overwrite loadable and saveable variables (for example, the pipeline components of the specific pipeline class). The overwritten components are directly passed to the pipeline's __init__ method. See example below for more information.
Instantiate a ControlNetModel from pretrained ControlNet weights saved in the original .ckpt or .safetensors format. The model is set in evaluation mode (model.eval()) by default.
Examples:
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path
model = ControlNetModel.from_single_file(url)
url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path
pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet)
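The keyword arguments documented above are forwarded by from_single_file. For instance, a minimal sketch of loading the same Canny checkpoint in half precision via the documented torch_dtype argument:

import torch
from diffusers import ControlNetModel

url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth"
# load the weights in float16 instead of the checkpoint's default dtype
controlnet = ControlNetModel.from_single_file(url, torch_dtype=torch.float16)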
DDIM
Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng, and Stefano Ermon.
The abstract from the paper is:
Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.
The original codebase can be found at ermongroup/ddim.
DDIMPipeline
class diffusers.DDIMPipeline ( unet, scheduler )
Parameters
unet (UNet2DModel) —
A UNet2DModel to denoise the encoded image latents.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded image. Can be one of DDPMScheduler or DDIMScheduler.
Pipeline for image generation.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
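Because the pipeline is just a unet plus a scheduler, it can also be assembled by hand. A minimal sketch, assuming a Hub checkpoint such as google/ddpm-cat-256 that hosts an unconditional UNet2DModel:

from diffusers import DDIMPipeline, DDIMScheduler, UNet2DModel

unet = UNet2DModel.from_pretrained("google/ddpm-cat-256")  # assumed checkpoint
scheduler = DDIMScheduler()
pipe = DDIMPipeline(unet=unet, scheduler=scheduler)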
__call__ ( batch_size: int = 1, generator: Union = None, eta: float = 0.0, num_inference_steps: int = 50, use_clipped_model_output: Optional = None, output_type: Optional = 'pil', return_dict: bool = True ) → ImagePipelineOutput or tuple
Parameters
batch_size (int, optional, defaults to 1) —
The number of images to generate.
generator (torch.Generator, optional) —
A torch.Generator to make generation deterministic.
eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers. A value of 0 corresponds to DDIM and 1 corresponds to DDPM.
num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
use_clipped_model_output (bool, optional, defaults to None) —
If True or False, see documentation for DDIMScheduler.step(). If None, nothing is passed downstream to the scheduler (use None for schedulers which don't support this argument).
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL.Image or np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return an ImagePipelineOutput instead of a plain tuple.
Returns
ImagePipelineOutput or tuple
If return_dict is True, ImagePipelineOutput is returned; otherwise, a tuple is returned where the first element is a list with the generated images.
The call function to the pipeline for generation.
Example:
>>> from diffusers import DDIMPipeline
>>> import PIL.Image
>>> import numpy as np

>>> # load model and scheduler
>>> pipe = DDIMPipeline.from_pretrained("fusing/ddim-lsun-bedroom")

>>> # run pipeline in inference (sample random noise and denoise), requesting NumPy output
>>> images = pipe(eta=0.0, num_inference_steps=50, output_type="np").images

>>> # process image to PIL: images is a float array in [0, 1] of shape (batch, height, width, channels)
>>> image_processed = (images * 255).round().astype(np.uint8)
>>> image_pil = PIL.Image.fromarray(image_processed[0])

>>> # save image
>>> image_pil.save("test.png")
ImagePipelineOutput
class diffusers.ImagePipelineOutput ( images: Union )
Parameters
images (List[PIL.Image.Image] or np.ndarray) —
List of denoised PIL images of length batch_size or a NumPy array of shape (batch_size, height, width, num_channels).
Output class for image pipelines.
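The generator argument of __call__ makes sampling reproducible. A minimal sketch of a seeded run, reusing the checkpoint from the example above; the images field of the returned ImagePipelineOutput holds the PIL images:

>>> import torch
>>> from diffusers import DDIMPipeline

>>> pipe = DDIMPipeline.from_pretrained("fusing/ddim-lsun-bedroom")

>>> # a fixed seed makes the sampled noise, and therefore the image, deterministic
>>> generator = torch.Generator().manual_seed(0)
>>> image = pipe(batch_size=1, generator=generator, num_inference_steps=50).images[0]
>>> image.save("seeded.png")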
IPNDMScheduler
IPNDMScheduler is a fourth-order Improved Pseudo Linear Multistep scheduler. The original implementation can be found at crowsonkb/v-diffusion-pytorch.
IPNDMScheduler
class diffusers.IPNDMScheduler ( num_train_timesteps: int = 1000, trained_betas: Union = None )
Parameters
num_train_timesteps (int, defaults to 1000) —
The number of diffusion steps used to train the model.
trained_betas (np.ndarray, optional) —
Pass an array of betas directly to the constructor to bypass beta_start and beta_end.
A fourth-order Improved Pseudo Linear Multistep scheduler.
This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic methods the library implements for all schedulers, such as loading and saving.
scale_model_input ( sample: FloatTensor, *args, **kwargs ) → torch.FloatTensor
Parameters
sample (torch.FloatTensor) —
The input sample.
Returns
torch.FloatTensor
A scaled input sample.
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
set_timesteps ( num_inference_steps: int, device: Union = None )
Parameters
num_inference_steps (int) —
The number of diffusion steps used when generating samples with a pre-trained model.
device (str or torch.device, optional) —
The device to which the timesteps should be moved. If None, the timesteps are not moved.
Sets the discrete timesteps used for the diffusion chain (to be run before inference).
step ( model_output: FloatTensor, timestep: int, sample: FloatTensor, return_dict: bool = True ) → SchedulerOutput or tuple
Parameters
model_output (torch.FloatTensor) —
The direct output from the learned diffusion model.
timestep (int) —
The current discrete timestep in the diffusion chain.
sample (torch.FloatTensor) —
A current instance of a sample created by the diffusion process.
return_dict (bool) —
Whether or not to return a SchedulerOutput or tuple.
Returns
SchedulerOutput or tuple
If return_dict is True, SchedulerOutput is returned; otherwise, a tuple is returned where the first element is the sample tensor.
Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with the linear multistep method. It performs one forward pass multiple times to approximate the solution.
SchedulerOutput
class diffusers.schedulers.scheduling_utils.SchedulerOutput ( prev_sample: FloatTensor )
Parameters
prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) —
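Taken together, set_timesteps, scale_model_input, and step form the standard scheduler loop. A minimal sketch of that loop with IPNDMScheduler, using a zero-valued stand-in for the denoising model (a real model from crowsonkb/v-diffusion-pytorch would be called instead) and a hypothetical 3x64x64 sample shape:

import torch
from diffusers import IPNDMScheduler

scheduler = IPNDMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(num_inference_steps=50, device="cpu")

def model(sample, t):
    # stand-in denoiser: a real network's prediction would go here
    return torch.zeros_like(sample)

sample = torch.randn(1, 3, 64, 64)  # hypothetical sample shape
for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)  # passthrough for this scheduler
    model_output = model(model_input, t)
    sample = scheduler.step(model_output, t, sample).prev_sample  # SchedulerOutput.prev_sample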