(luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, so the expected shape would be (B, H, W, 1).
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image.
num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional) — The prompt or prompts to guide what not to include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic.
latents (torch.FloatTensor, optional) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.
output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL.Image or np.array.
return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
callback (Callable, optional) — A function called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) — The frequency at which the callback function is called. If not specified, the callback is called at every step.

Returns

StableDiffusionPipelineOutput or tuple

If return_dict is True, StableDiffusionPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images and the second element is a list of bools indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content.
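The guidance_scale parameter controls classifier-free guidance, where the model's unconditional and text-conditioned noise predictions are combined at each denoising step. A minimal numpy sketch of that combination step (the helper name cfg_combine and the toy tensors are illustrative, not a diffusers API):

```python
import numpy as np

def cfg_combine(noise_uncond, noise_cond, guidance_scale):
    # Classifier-free guidance: push the prediction away from the
    # unconditional direction and toward the text-conditioned one.
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)

# Toy tensors standing in for the UNet's two noise predictions.
uncond = np.zeros((1, 4, 8, 8))
cond = np.ones((1, 4, 8, 8))

guided = cfg_combine(uncond, cond, guidance_scale=7.5)
```

With guidance_scale=1 this reduces to the conditional prediction alone, which is why guidance is only considered enabled above 1.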
The call function to the pipeline for generation.

Example:

>>> import PIL
>>> import requests
>>> import torch
>>> from io import BytesIO

>>> from diffusers import PaintByExamplePipeline


>>> def download_image(url):
...     response = requests.get(url)
...     return PIL.Image.open(BytesIO(response.content)).convert("RGB")


>>> img_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png"
>>> mask_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png"
>>> example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg"

>>> init_image = download_image(img_url).resize((512, 512))
>>> mask_image = download_image(mask_url).resize((512, 512))
>>> example_image = download_image(example_url).resize((512, 512))

>>> pipe = PaintByExamplePipeline.from_pretrained(
...     "Fantasy-Studio/Paint-by-Example",
...     torch_dtype=torch.float16,
... )
>>> pipe = pipe.to("cuda")

>>> image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0]
>>> image

StableDiffusionPipelineOutput

class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput

( images: Union[List[PIL.Image.Image], np.ndarray], nsfw_content_detected: Optional[List[bool]] )

Parameters

images (List[PIL.Image.Image] or np.ndarray) — List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).
nsfw_content_detected (List[bool]) — List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content, or None if safety checking could not be performed.

Output class for Stable Diffusion pipelines.
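The two return modes map onto each other directly: with return_dict=True you get the fields by name, with return_dict=False the same values arrive as a plain tuple in the same order. A hypothetical stand-in (PipelineOutputSketch is illustrative, not the real class):

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical stand-in mirroring the documented fields; the real class is
# diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput.
@dataclass
class PipelineOutputSketch:
    images: List[object]                         # PIL images or a NumPy array
    nsfw_content_detected: Optional[List[bool]]  # None if the check was skipped

out = PipelineOutputSketch(images=["img0"], nsfw_content_detected=[False])

# With return_dict=False, the same data would arrive as a plain tuple:
as_tuple = (out.images, out.nsfw_content_detected)
```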
Metal Performance Shaders (MPS)

🤗 Diffusers is compatible with Apple silicon (M1/M2 chips) using the PyTorch mps device, which uses the Metal framework to leverage the GPU on macOS devices. You’ll need to have:

- a macOS computer with Apple silicon (M1/M2) hardware
- macOS 12.6 or later (13.0 or later recommended)
- an arm64 version of Python
- PyTorch 2.0 (recommended) or 1.13 (minimum version supported for mps)

The mps backend uses PyTorch’s .to() interface to move the Stable Diffusion pipeline onto your M1 or M2 device:

from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps")

# Recommended if your computer has < 64 GB of RAM
pipe.enable_attention_slicing()

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image

Generating multiple prompts in a batch can crash or fail to work reliably. We believe this is related to the mps backend in PyTorch. While this is being investigated, you should iterate instead of batching.

If you’re using PyTorch 1.13, you need to “prime” the pipeline with an additional one-time pass through it. This is a temporary workaround for an issue where the first inference pass produces slightly different results than subsequent ones. You only need to do this pass once, and after just one inference step you can discard the result.

from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("mps")
pipe.enable_attention_slicing()

prompt = "a photo of an astronaut riding a horse on mars"

# First-time "warmup" pass if PyTorch version is 1.13
_ = pipe(prompt, num_inference_steps=1)

# Results match those from the CPU device after the warmup pass.
image = pipe(prompt).images[0]

Troubleshoot

M1/M2 performance is very sensitive to memory pressure. When memory pressure builds, the system automatically swaps if it needs to, which significantly degrades performance. To prevent this from happening, we recommend attention slicing to reduce memory pressure during inference and prevent swapping. This is especially relevant if your computer has less than 64GB of system RAM, or if you generate images at non-standard resolutions larger than 512×512 pixels. Call the enable_attention_slicing() function on your pipeline:

from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("mps")
pipeline.enable_attention_slicing()

Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually costs about 20% in performance on computers without universal memory, but we’ve observed better performance in most Apple silicon computers unless you have 64GB of RAM or more.
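One way to picture what attention slicing saves: attention is computed over slices of the batch/heads dimension, so only one slice's attention matrix is alive in memory at a time, while the final result is unchanged. A numpy sketch of the idea (not the actual diffusers implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Full attention: materializes the whole (heads, seq, seq) score matrix.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def sliced_attention(q, k, v, slice_size):
    # Process the batch/heads dimension in chunks; each iteration only
    # holds one slice's score matrix in memory.
    out = np.empty_like(q)
    for start in range(0, q.shape[0], slice_size):
        stop = start + slice_size
        out[start:stop] = attention(q[start:stop], k[start:stop], v[start:stop])
    return out

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 16, 32)) for _ in range(3))
```

The sliced result matches full attention exactly; the trade-off is only memory versus a little extra loop overhead.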
Understanding pipelines, models and schedulers

🧨 Diffusers is designed to be a user-friendly and flexible toolbox for building diffusion systems tailored to your use-case. At the core of the toolbox are models and schedulers. While the DiffusionPipeline bundles these components together for convenience, you can also unbundle the pipeline and use the models and schedulers separately to create new diffusion systems.

In this tutorial, you’ll learn how to use models and schedulers to assemble a diffusion system for inference, starting with a basic pipeline and then progressing to the Stable Diffusion pipeline.

Deconstruct a basic pipeline

A pipeline is a quick and easy way to run a model for inference, requiring no more than four lines of code to generate an image:

>>> from diffusers import DDPMPipeline

>>> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda")
>>> image = ddpm(num_inference_steps=25).images[0]
>>> image

That was super easy, but how did the pipeline do that? Let’s break down the pipeline and take a look at what’s happening under the hood.

In the example above, the pipeline contains a UNet2DModel model and a DDPMScheduler. The pipeline denoises an image by taking random noise the size of the desired output and passing it through the model several times. At each timestep, the model predicts the noise residual and the scheduler uses it to predict a less noisy image. The pipeline repeats this process until it reaches the end of the specified number of inference steps.

To recreate the pipeline with the model and scheduler separately, let’s write our own denoising process.

Load the model and scheduler:

>>> from diffusers import DDPMScheduler, UNet2DModel

>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
>>> model = UNet2DModel.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda")

Set the number of timesteps to run the denoising process for:

>>> scheduler.set_timesteps(50)

Setting the scheduler timesteps creates a tensor with evenly spaced elements in it, 50 in this example. Each element corresponds to a timestep at which the model denoises an image. When you create the denoising loop later, you’ll iterate over this tensor to denoise an image:

>>> scheduler.timesteps
tensor([980, 960, 940, 920, 900, 880, 860, 840, 820, 800, 780, 760, 740, 720,
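Those evenly spaced values can be reproduced with simple arithmetic. A sketch approximating how a DDPM-style scheduler spreads 50 inference steps across 1000 training timesteps (an illustration, not the diffusers source):

```python
import numpy as np

num_train_timesteps = 1000  # noise levels the model was trained on
num_inference_steps = 50    # steps we want at inference time

# Take every (1000 // 50) = 20th timestep, then reverse so denoising
# runs from the noisiest level (980) down to 0.
step_ratio = num_train_timesteps // num_inference_steps
timesteps = (np.arange(num_inference_steps) * step_ratio)[::-1]

print(timesteps[:4])  # [980 960 940 920]
```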