import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard

num_inference_steps = 25

# shard inputs and rng across devices
params = replicate(params)
prng_seed = jax.random.split(prng_seed, jax.device_count())
prompt_ids = shard(prompt_ids)

images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))

The following Flax schedulers are not yet compatible with the Flax Stable Diffusion Pipeline: FlaxLMSDiscreteScheduler, FlaxDDPMScheduler.
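The reshape in the last line merges the device axis back into the batch axis so numpy_to_pil sees one flat batch. A minimal NumPy sketch of the same shape manipulation (the array sizes here are illustrative, not the pipeline's real output shape):

```python
import numpy as np

# Sharded pipeline output: (num_devices, per_device_batch, height, width, channels)
images = np.zeros((2, 4, 64, 64, 3))
num_samples = images.shape[0] * images.shape[1]

# Merge the device and batch axes into a single flat batch
flat = images.reshape((num_samples,) + images.shape[-3:])
print(flat.shape)  # (8, 64, 64, 3)
```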
IP-Adapter IP-Adapter is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder. Files generated from IP-Adapter are only ~100 MB. Learn how to load an IP-Adapte...
Can be either:
- A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on the Hub.
- A path to a directory (for example ./my_model_directory) containing the model weights saved with ModelMixin.save_pretrained().
- A torch state dict.
cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.

unload_ip_adapter < source > ( )

Unloads the IP Adapter weights.

Examples:

>>> # Assuming `pipeline` is already loaded with the IP Adapter weights.
>>> pipeline.unload_ip_adapter()
>>> ...
UNet2DModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual dif...
Height and width of input/output sample. Dimensions must be a multiple of 2 ** (len(block_out_channels) - 1).
in_channels (int, optional, defaults to 3) — Number of channels in the input sample.
out_channels (int, optional, defaults to 3) — Number of channels in the output.
center_input_sample (bool, optional, de...
Whether to flip sin to cos for Fourier time embedding.
down_block_types (Tuple[str], optional, defaults to ("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D")) — Tuple of downsample block types.
mid_block_type (str, optional, defaults to "UNetMidBlock2D") — Block type for the middle of the UNet; it can be either UNetMidBlock2D or UnCLIPUNetMidBlock2D.
up_block_types (Tuple[str], optional, defaults to ("AttnUpBlock2D", "AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D")) — Tuple of upsample block types.
block_out_channels (Tuple[int], optional, defaults to (224, 448, 672, 896)) — Tuple of block output channels.
layers_per_block (int, optional, defaults to 2) — The number of layers per block.
mid_block_scale_factor (float, optional, defaults to 1) — The scale factor for the mid block.
downsample_padding (int, optional, defaults to 1) — The padding for the downsample convolution.
downsamp...
The downsample type for downsampling layers. Choose between "conv" and "resnet".
upsample_type (str, optional, defaults to conv) — The upsample type for upsampling layers. Choose between "conv" and "resnet".
dropout (float, optional, defaults to 0.0) — The dropout probability to use.
act_fn (str, optional, defaults to "silu") — The activation function to use.
attention_head_dim (int, optional, defaults to 8) — The attention head dimension. ...
If set to an integer, a group norm layer will be created in the mid block’s Attention layer with the given number of groups. If left as None, the group norm layer will only be created if resnet_time_scale_shift is set to default, and if created will have norm_num_groups groups.
norm_eps (float, optional, defaults to 1e-5) — The epsilon for normalization.
resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift.
class_embed_type (str, optional, defaults to None) — The type of class embedding to use, which is ultimately summed with the time embeddings. Choose from None, "timestep", or "identity".
num_class_embeds (int, optional, defaults to None) — Input dimension of the learnable embedding matrix to be projected to time_embed_dim when performing class conditioning with class_embed_type equal to None.

A 2D UNet model that takes a noisy sample and a timestep and returns a sample-shaped output.

This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented
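The sample-size constraint above can be checked directly: each down block after the first halves the spatial resolution, so the input dimensions must divide evenly that many times. A small pure-Python sketch, using the default block_out_channels listed above:

```python
def required_divisor(block_out_channels):
    # One downsampling step per block transition, so input dimensions
    # must be divisible by 2 ** (len(block_out_channels) - 1)
    return 2 ** (len(block_out_channels) - 1)

# With the default block_out_channels of (224, 448, 672, 896):
divisor = required_divisor((224, 448, 672, 896))
print(divisor)             # 8
print(256 % divisor == 0)  # True: 256 is a valid sample size
```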
for all models (such as downloading or saving).

forward < source > ( sample: FloatTensor timestep: Union class_labels: Optional = None return_dict: bool = True ) → UNet2DOutput or tuple

Parameters:
sample (torch.FloatTensor) — The noisy input tensor with the following shape (batch, channel, height, width).
timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input.
class_labels (torch.FloatTensor, optional, defaults to None) — Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings.
return_dict (bool, optional, defaults to True) — Whether or not to return a UNet2DOutput instead of a plain tuple.

Returns
UNet2DOutput or tuple
If return_dict is True, a UNet2DOutput is returned; otherwise a tuple is returned where the first element is the sample tensor.

The UNet2DModel forward method.

UNet2DOutput

class diffusers.models.unet_2d.UNet2DOutput < source > ( sample: FloatTensor )

Parameters:
sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — The hidden states output from the last layer of the model.

The output of UNet2DModel.
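The return_dict convention described above is shared across Diffusers model classes. A hypothetical pure-Python sketch of the pattern (UNet2DOutputSketch and forward_sketch are illustrative names, not the library's classes, and the identity computation stands in for the real denoising step):

```python
from dataclasses import dataclass

@dataclass
class UNet2DOutputSketch:
    sample: object  # stands in for the torch.FloatTensor output

def forward_sketch(sample, return_dict=True):
    out = sample  # identity stands in for the actual UNet computation
    if return_dict:
        return UNet2DOutputSketch(sample=out)
    return (out,)  # plain tuple; the first element is the sample tensor

result = forward_sketch([1.0, 2.0])
print(result.sample)  # [1.0, 2.0]
result = forward_sketch([1.0, 2.0], return_dict=False)
print(result[0])      # [1.0, 2.0]
```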
Distributed inference with multiple GPUs On distributed setups, you can run inference across multiple GPUs with 🤗 Accelerate or PyTorch Distributed, which is useful for generating with multiple prompts in parallel. This guide will show you how to use 🤗 Accelerate and PyTorch Distributed for distributed inference. 🤗...
import torch
from accelerate import PartialState
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
)
distributed_state = PartialState()
pipeline.to(distributed_state.device)

with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt:
    result = pipeline(prompt).images[0]
    result.save(f"result_{distributed_state.process_index}.png")

Use the --num_processes argument to specify the number of GPUs to use, and call accelerate launch to run the script:

accelerate launch run_distributed.py --num_processes=2

To learn more, take a look at the Distributed Inference with 🤗 Accelerate...
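split_between_processes divides the prompt list across the launched processes, so each GPU generates an image for a different prompt. A simplified sketch of the splitting logic (an illustration of the idea, not Accelerate's actual implementation):

```python
def split_between_processes(items, num_processes, process_index):
    # Each process receives a contiguous slice of the inputs
    per_process = -(-len(items) // num_processes)  # ceiling division
    start = process_index * per_process
    return items[start:start + per_process]

prompts = ["a dog", "a cat"]
print(split_between_processes(prompts, 2, 0))  # ['a dog']
print(split_between_processes(prompts, 2, 1))  # ['a cat']
```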
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from diffusers import DiffusionPipeline

sd = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
)

You’ll want to create a function to run inference; init_process_group handles creating a distributed environment with the type of backend to use, the rank of the current process, and the world_size or the number of processes participating. If you’re running inference in parallel over 2 GPUs, then the world_size is 2....

def run_inference(rank, world_size):
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    sd.to(rank)

    if dist.get_rank() == 0:
        prompt = "a dog"
    elif dist.get_rank() == 1:
        prompt = "a cat"

    image = sd(prompt).images[0]
    image.save(f"./{prompt.replace(' ', '_')}.png")

To run the distributed inference, call mp.spawn to run the run_inference function on the number of GPUs defined in world_size:

def main():
    world_size = 2
    mp.spawn(run_inference, args=(world_size,), nprocs=world_size, join=True)

if __name__ == "__main__":
    main()

Once you’ve completed the inference script, use the --nproc_per_node argument to specify the number of GPUs to use and call torchrun to run the script:

torchrun run_distributed.py --nproc_per_node=2
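mp.spawn prepends the process rank to the arguments it passes each worker, which is why run_inference receives (rank, world_size) while args only contains world_size. A sequential sketch of that calling convention (no processes are actually forked here; spawn_sketch is an illustrative stand-in):

```python
def spawn_sketch(fn, args=(), nprocs=1):
    # mp.spawn calls fn(rank, *args) once per rank in range(nprocs);
    # this sketch runs the calls sequentially instead of in subprocesses.
    return [fn(rank, *args) for rank in range(nprocs)]

def run_inference(rank, world_size):
    return f"rank {rank} of {world_size}"

print(spawn_sketch(run_inference, args=(2,), nprocs=2))
# ['rank 0 of 2', 'rank 1 of 2']
```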