## OpenVINO

🤗 Optimum provides Stable Diffusion pipelines compatible with OpenVINO to perform inference on a variety of Intel processors (see the full list of supported devices).

You'll need to install 🤗 Optimum Intel with the `--upgrade-strategy eager` option to ensure `optimum-intel` is using the latest version:

```bash
pip install --upgrade-strategy eager optimum["openvino"]
```

This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with OpenVINO.

### Stable Diffusion

To load and run inference, use the `OVStableDiffusionPipeline`. If you want to load a PyTorch model and convert it to the OpenVINO format on the fly, set `export=True`:

```python
from optimum.intel import OVStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"
pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True)
prompt = "sailing ship in storm by Rembrandt"
image = pipeline(prompt).images[0]

# Don't forget to save the exported model
pipeline.save_pretrained("openvino-sd-v1-5")
```

To further speed up inference, statically reshape the model. If you change any parameters such as the output height or width, you'll need to statically reshape your model again.

```python
# Define the shapes related to the inputs and desired outputs
batch_size, num_images, height, width = 1, 1, 512, 512

# Statically reshape the model
pipeline.reshape(batch_size, height, width, num_images)
# Compile the model before inference
pipeline.compile()

image = pipeline(
    prompt,
    height=height,
    width=width,
    num_images_per_prompt=num_images,
).images[0]
```

You can find more examples in the 🤗 Optimum documentation, and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting.

### Stable Diffusion XL

To load and run inference with SDXL, use the `OVStableDiffusionXLPipeline`:

```python
from optimum.intel import OVStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id)
prompt = "sailing ship in storm by Rembrandt"
image = pipeline(prompt).images[0]
```

To further speed up inference, statically reshape the model as shown in the Stable Diffusion section. You can find more examples in the 🤗 Optimum documentation, and running SDXL in OpenVINO is supported for text-to-image and image-to-image.
## Single files

Diffusers supports loading pretrained pipeline (or model) weights stored in a single file, such as a `ckpt` or `safetensors` file. These single-file formats are typically produced from community-trained models. There are three classes for loading single-file weights:

- FromSingleFileMixin supports loading pretrained pipeline weights stored in a single file, which can either be a `ckpt` or `safetensors` file.
- FromOriginalVAEMixin supports loading pretrained AutoencoderKL weights stored in a single file, which can either be a `ckpt` or `safetensors` file.
- FromOriginalControlnetMixin supports loading pretrained ControlNet weights stored in a single file, which can either be a `ckpt` or `safetensors` file.

To learn more about how to load single-file weights, see the Load different Stable Diffusion formats loading guide.

### FromSingleFileMixin

class diffusers.loaders.FromSingleFileMixin

Load model weights saved in the `.ckpt` format into a DiffusionPipeline.

#### from_single_file

( pretrained_model_link_or_path, **kwargs )

Parameters:

- **pretrained_model_link_or_path** (`str` or `os.PathLike`, optional) — Can be either:
  - A link to the `.ckpt` file (for example `"https://huggingface.co/<repo_id>/blob/main/<path_to_file>.ckpt"`) on the Hub.
  - A path to a file containing all pipeline weights.
- **torch_dtype** (`str` or `torch.dtype`, optional) — Override the default `torch.dtype` and load the model with another dtype.
- **force_download** (`bool`, optional, defaults to `False`) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- **cache_dir** (`Union[str, os.PathLike]`, optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
- **resume_download** (`bool`, optional, defaults to `False`) — Whether or not to resume downloading the model weights and configuration files. If set to `False`, any incompletely downloaded files are deleted.
- **proxies** (`Dict[str, str]`, optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, optional, defaults to `False`) — Whether to only load local model weights and configuration files or not. If set to `True`, the model won't be downloaded from the Hub.
- **token** (`str` or `bool`, optional) — The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, optional, defaults to `"main"`) — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
- **use_safetensors** (`bool`, optional, defaults to `None`) — If set to `None`, the safetensors weights are downloaded if they're available and if the safetensors library is installed. If set to `True`, the model is forcibly loaded from safetensors weights. If set to `False`, safetensors weights are not loaded.

Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the `.ckpt` or `.safetensors`
format. The pipeline is set in evaluation mode (`model.eval()`) by default.

Examples:

```python
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> # Download pipeline from huggingface.co and cache.
>>> pipeline = StableDiffusionPipeline.from_single_file(
...     "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors"
... )

>>> # Load pipeline from a local file
>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt
>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly.ckpt")

>>> # Enable float16 and move to GPU
>>> pipeline = StableDiffusionPipeline.from_single_file(
...     "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt",
...     torch_dtype=torch.float16,
... )
>>> pipeline.to("cuda")
```

### FromOriginalVAEMixin

class diffusers.loaders.FromOriginalVAEMixin

Load pretrained AutoencoderKL weights saved in the `.ckpt` or `.safetensors` format into an AutoencoderKL.

#### from_single_file

( pretrained_model_link_or_path, **kwargs )

Parameters:

- **pretrained_model_link_or_path** (`str` or `os.PathLike`, optional) —
  Can be either:

  - A link to the `.ckpt` file (for example `"https://huggingface.co/<repo_id>/blob/main/<path_to_file>.ckpt"`) on the Hub.
  - A path to a file containing all pipeline weights.

- **config_file** (`str`, optional) — Filepath to the configuration YAML file associated with the model. If not provided it will default to: `https://raw.githubusercontent.com/CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml`
- **torch_dtype** (`str` or `torch.dtype`, optional) — Override the default `torch.dtype` and load the model with another dtype. If `"auto"` is passed, the dtype is automatically derived from the model's weights.
- **force_download** (`bool`, optional, defaults to `False`) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- **cache_dir** (`Union[str, os.PathLike]`, optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
- **resume_download** (`bool`, optional, defaults to `False`) — Whether or not to resume downloading the model weights and configuration files. If set to `False`, any incompletely downloaded files are deleted.
- **proxies** (`Dict[str, str]`, optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, optional, defaults to `False`) — Whether to only load local model weights and configuration files or not. If set to `True`, the model won't be downloaded from the Hub.
- **token** (`str` or `bool`, optional) — The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, optional, defaults to `"main"`) — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
- **image_size** (`int`, optional, defaults to `512`) — The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable Diffusion v2 base model. Use 768 for Stable Diffusion v2.
- **scaling_factor** (`float`, optional, defaults to `0.18215`) — The component-wise standard deviation of the trained latent space computed using the first batch of the training set. This is used to scale the latent space to have unit variance when training the diffusion model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1 / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image Synthesis with Latent Diffusion Models paper.
- **use_safetensors** (`bool`, optional, defaults to `None`) — If set to `None`, the safetensors weights are downloaded if they're available and if the safetensors library is installed. If set to `True`, the model is forcibly loaded from safetensors weights. If set to `False`, safetensors weights are not loaded.
- **kwargs** (remaining dictionary of keyword arguments, optional) —
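The two `scaling_factor` formulas above are exact inverses, which a quick standalone check with plain numbers confirms (this is self-contained Python, not Diffusers code; the value 2.5 is an arbitrary example latent):

```python
scaling_factor = 0.18215  # default for Stable Diffusion v1 VAEs

# Encoding direction: scale the latent before it enters the diffusion model
z = 2.5
z_scaled = z * scaling_factor

# Decoding direction: undo the scaling before the VAE decoder
z_decoded = (1 / scaling_factor) * z_scaled

# Round-trips back to the original value (up to floating-point error)
assert abs(z_decoded - z) < 1e-12
```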