pipeline.to(distributed_state.device)
with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt:
    result = pipeline(prompt).images[0]
    result.save(f"result_{distributed_state.process_index}.png")

Use the --num_processes argument to specify the number of GPUs to use, and call accelerate launch to run the script:

accelerate launch run_distributed.py --num_processes=2

To learn more, take a look at the Distributed Inference with 🤗 Accelerate...
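Conceptually, split_between_processes hands each process a contiguous slice of the input list, so each GPU works on its own prompts. A minimal pure-Python sketch of that chunking logic (the helper name split_for_process is ours for illustration, not an Accelerate API):

```python
def split_for_process(items, process_index, num_processes):
    """Return the contiguous slice of `items` assigned to one process.

    When len(items) is not divisible by num_processes, the
    lower-indexed processes receive one extra item each.
    """
    base, remainder = divmod(len(items), num_processes)
    # Processes with index < remainder get one extra item.
    start = process_index * base + min(process_index, remainder)
    end = start + base + (1 if process_index < remainder else 0)
    return items[start:end]

# Two processes splitting the two prompts from the example above:
chunk_0 = split_for_process(["a dog", "a cat"], 0, 2)  # ["a dog"]
chunk_1 = split_for_process(["a dog", "a cat"], 1, 2)  # ["a cat"]
```

With two prompts and two processes each GPU gets exactly one prompt, which is why the script above saves one image per process_index.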
import torch.distributed as dist
import torch.multiprocessing as mp
from diffusers import DiffusionPipeline
sd = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
)

You’ll want to create a function to run inference; init_process_group handles creating a distributed environment with the type of backend to use, the rank of the current process, and the world_size, or the number of processes participating. If you’re running inference in parallel over 2 GPUs, then the world_size is 2....
def run_inference(rank, world_size):
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    sd.to(rank)

    if torch.distributed.get_rank() == 0:
        prompt = "a dog"
    elif torch.distributed.get_rank() == 1:
        prompt = "a cat"

    image = sd(prompt).images[0]
    image.save(f"./{'_'.join(prompt)}.png")

To run the distributed inference, call mp.spawn to run the run_inference function on the number of GPUs defined in world_size:

def main():
    world_size = 2
    mp.spawn(run_inference, args=(world_size,), nprocs=world_size, join=True)
if __name__ == "__main__":
    main()

Once you’ve completed the inference script, use the --nproc_per_node argument to specify the number of GPUs to use and call torchrun to run the script:

torchrun run_distributed.py --nproc_per_node=2
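The if/elif rank checks above hard-code one prompt per rank; the same assignment can be written as a list lookup, which scales to any world_size. A small sketch (pure Python, the helper name prompt_for_rank is ours):

```python
def prompt_for_rank(prompts, rank):
    """Pick the prompt assigned to this process rank (rank 0 -> prompts[0], ...)."""
    if rank >= len(prompts):
        raise ValueError(f"rank {rank} has no prompt (only {len(prompts)} provided)")
    return prompts[rank]

prompts = ["a dog", "a cat"]
# Inside run_inference: prompt = prompt_for_rank(prompts, torch.distributed.get_rank())
```

With world_size = 4 you would simply pass a four-element prompt list instead of adding more elif branches.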
Load different Stable Diffusion formats Stable Diffusion models are available in different formats depending on the framework they’re trained and saved with, and where you download them from. Converting these formats for use in 🤗 Diffusers allows you to use all the features supported by the library, such as...
git clone https://huggingface.co/CiaraRowles/TemporalNet

Open a pull request on the repository where you’re converting the checkpoint from:

cd TemporalNet && git fetch origin refs/pr/13:pr/13
git checkout pr/13

There are several input arguments to configure in the conversion script, but the most important ones are:

checkpoint_path: the path to the .ckpt file to convert.
original_config_file: a YAML file defining the configuration of the original architecture. If you can’t find this file, try searching for t...
optimization techniques. The Convert KerasCV Space converts .pb or .h5 files to PyTorch, and then wraps them in a StableDiffusionPipeline so it is ready for inference. The converted checkpoint is stored in a repository on the Hugging Face Hub. For this example, let’s convert the sayakpaul/textual-inversion-kerasio chec...
pipeline = DiffusionPipeline.from_pretrained(
    "sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline", use_safetensors=True
)

Then, you can generate an image like:

from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained(
"sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline", use_safetensors=True
)
pipeline.to("cuda")
placeholder_token = "<my-funny-cat-token>"
prompt = f"two {placeholder_token} getting married, photorealistic, high quality"
image = pipeline(prompt, num_inference_steps=50).images[0]

A1111 LoRA files

Automatic1111 (A1111) is a popular web UI for Stable Diffusion that supports model sharing platforms like Civitai. Models trained with the Low-Rank Adaptation (LoRA) technique are especially popular because they’re fast to train and have a muc...
import torch
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "Lykon/dreamshaper-xl-1-0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

Download a LoRA checkpoint from Civitai; this example uses the Blueprintify SD XL 1.0 checkpoint, but feel free to try out any LoRA checkpoint!

# uncomment to download the safetensor weights
#!wget https://civitai.com/api/download/models/168776 -O blueprintify.safetensors

Load the LoRA checkpoint into the pipeline with the load_lora_weights() method:

pipeline.load_lora_weights(".", weight_name="blueprintify.safetensors")

Now you can use the pipeline to generate images:

prompt = "bl3uprint...
negative_prompt = "lowres, cropped, worst quality, low quality, normal quality, artifacts, signature, watermark, username, blurry, more than one bridge, bad architecture"
image = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    generator=torch.manual_seed(0),
).images[0]
image
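As background on why LoRA checkpoints are small and fast to train: a LoRA stores two low-rank matrices per adapted weight, and their product is added to the frozen base weight when the checkpoint is loaded. A schematic NumPy sketch of that update (shapes and scale are illustrative, not the actual checkpoint layout):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank = 64, 64, 4       # rank is much smaller than d_out, d_in
W = rng.normal(size=(d_out, d_in))  # frozen base weight of one layer
A = rng.normal(size=(rank, d_in))   # LoRA "down" projection
B = rng.normal(size=(d_out, rank))  # LoRA "up" projection

# Loading a LoRA effectively applies W + scale * (B @ A) to the base weight.
scale = 1.0
W_eff = W + scale * (B @ A)

# The update touches every entry of W but needs far fewer stored parameters:
lora_params = A.size + B.size  # 4*64 + 64*4 = 512
full_params = W.size           # 64*64 = 4096
```

Only A and B are trained and shipped, which is why a LoRA file is a tiny fraction of the size of the full model it adapts.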
ScoreSdeVpScheduler

ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. It was introduced in the Score-Based Generative Modeling through Stochastic Differential Equations paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. ...

Parameters:

num_train_timesteps (int) — The number of diffusion steps to train the model.
beta_min (float, defaults to 0.1) — The minimum value of the linear beta schedule.
beta_max (float, defaults to 20) — The maximum value of the linear beta schedule.
sampling_eps (float, defaults to 1e-3) — The end value of sampling, where timesteps decrease progressively from 1 to epsilon.

ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic methods the library implements for all schedulers, such as loading and saving.

set_timesteps( num_inference_steps, device: Union = None )

Parameters:

num_inference_steps (int) — The number of diffusion steps used when generating samples with a pre-trained model.
device (str or torch.device, optional) — The device to which the timesteps should be moved. If None, the timesteps are not moved.

Sets the continuous timesteps used for the diffusion chain (to be run before inference).

step_pred( score, x, t, generator = None )

Parameters:

score — the learned model output, most often the predicted noise.
x — the current sample.
t — the current continuous timestep.
generator (torch.Generator, optional) — A random number generator.

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion process from the learned model outputs (most often the predicted noise).
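To make "reversing the SDE" concrete, here is a schematic NumPy version of one Euler-Maruyama step of the variance-preserving reverse SDE. This is a sketch of the underlying math, not the scheduler's exact implementation; the function name and signature are ours, and beta_min/beta_max follow the defaults above:

```python
import numpy as np

def reverse_vp_sde_step(x, score, t, dt, beta_min=0.1, beta_max=20.0, rng=None):
    """One Euler-Maruyama step of the reverse-time VP SDE:

    dx = [-0.5 * beta(t) * x - beta(t) * score] dt + sqrt(beta(t)) dW,
    integrated backwards in time (dt < 0).
    """
    if rng is None:
        rng = np.random.default_rng()
    beta_t = beta_min + t * (beta_max - beta_min)  # linear beta schedule
    drift = -0.5 * beta_t * x - beta_t * score     # reverse-time drift
    diffusion = np.sqrt(beta_t)
    x_mean = x + drift * dt                        # deterministic update
    noise = rng.standard_normal(x.shape)           # stochastic part, scaled below
    return x_mean + diffusion * np.sqrt(-dt) * noise

# Continuous timesteps decrease from 1 to sampling_eps, as set_timesteps describes:
timesteps = np.linspace(1.0, 1e-3, 10)
```

Iterating this step over the decreasing timesteps, feeding in the model's score estimate at each t, is what carries a noise sample back to a clean sample.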
Configuration

Schedulers from SchedulerMixin and models from ModelMixin inherit from ConfigMixin, which stores all the parameters that are passed to their respective __init__ methods in a JSON configuration file. To use private or gated models, log in with huggingface-cli login.

ConfigMixin

class diffusers.ConfigMixin...

provides the from_config() and save_config() methods for loading, downloading, and saving classes that inherit from ConfigMixin.

Class attributes:

config_name (str) — A filename under which the config should be stored when calling save_config() (should be overridden by parent class).
ignore_for_config (List[str]) — A list of attributes that should not be saved in the config (should be overridden by subclass).
has_compatibles (bool) — Whether the class has compatible classes (should be overridden by subclass).
_deprecated_kwargs (List[str]) — Keyword arguments that are deprecated. Note that the init function should only have a kwargs argument if at least one argument is deprecated (should be overridden by subclass).

load_config( pretrained_model_name_or_path: Union, return_unused_kwargs = False, return_commit_hash = False, **kwargs ) → dict

Parameters:

pretrained_model_name_or_path (str or os.PathLike, optional) —
Can be either:
A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on
the Hub.
A path to a directory (for example ./my_model_directory) containing model weights saved with
save_config().
cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys, and error messages.
local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.
return_unused_kwargs (bool, optional, defaults to False) — Whether unused keyword arguments of the config are returned.
return_commit_hash (bool, optional, defaults to False) — Whether the commit_hash of the loaded configuration is returned.

Returns

dict
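The behaviour described above, capturing __init__ arguments into a JSON config that can be saved and reloaded, can be sketched with a small stand-in class. This is a toy illustration of the pattern, not the real ConfigMixin:

```python
import json
import os
import tempfile

class ToyConfigMixin:
    config_name = "config.json"  # filename for the config, overridden by subclasses
    ignore_for_config = []       # attributes excluded from the saved config

    def register_to_config(self, **kwargs):
        # Store init parameters, skipping anything marked as ignored.
        self._config = {k: v for k, v in kwargs.items()
                        if k not in self.ignore_for_config}

    def save_config(self, save_directory):
        # Write the captured parameters to <save_directory>/config.json.
        with open(os.path.join(save_directory, self.config_name), "w") as f:
            json.dump(self._config, f)

    @classmethod
    def load_config(cls, pretrained_path):
        # Read the config back as a plain dict.
        with open(os.path.join(pretrained_path, cls.config_name)) as f:
            return json.load(f)

class ToyScheduler(ToyConfigMixin):
    def __init__(self, num_train_timesteps=2000, beta_min=0.1):
        self.register_to_config(num_train_timesteps=num_train_timesteps,
                                beta_min=beta_min)

with tempfile.TemporaryDirectory() as d:
    ToyScheduler(num_train_timesteps=1000).save_config(d)
    config = ToyScheduler.load_config(d)
```

Because the config is plain JSON keyed by parameter name, a class can be reconstructed from it later, which is what from_config() does in the real library.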