provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
return_dict (bool, optional, defaults to True) — Whether or not to return an AudioPipelineOutput instead of a plain tuple.
callback (Callable, optional) — A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) — The frequency at which the callback function is called. If not specified, the callback is called at every step.
cross_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
output_type (str, optional, defaults to "np") — The output format of the generated audio. Choose between "np" to return a NumPy np.ndarray or "pt" to return a PyTorch torch.Tensor object.
Returns
AudioPipelineOutput or tuple
If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated audio.
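The difference between the two return shapes can be sketched with a stand-in class; the names below mirror the documented behaviour, but fake_pipe is a hypothetical placeholder, not the actual pipeline:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class AudioPipelineOutput:
    """Stand-in mirroring the documented output class."""
    audios: List[list]


def fake_pipe(return_dict: bool = True):
    # Placeholder "denoised audio": one sample in a batch of one.
    audio = [[0.0, 0.1, -0.1]]
    if return_dict:
        # return_dict=True: an output object with an .audios attribute.
        return AudioPipelineOutput(audios=audio)
    # return_dict=False: a plain tuple whose first element is the audio list.
    return (audio,)


out = fake_pipe(return_dict=True)
tup = fake_pipe(return_dict=False)
assert out.audios == tup[0]
```

Either way, the same audio list is reachable — the output object simply gives it a named attribute.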
The call function to the pipeline for generation.
Examples:
>>> from diffusers import AudioLDMPipeline
>>> import torch
>>> import scipy
>>> repo_id = "cvssp/audioldm-s-full-v2"
>>> pipe = AudioLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs"
>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0]
>>> # save the audio sample as a .wav file
>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio)
AudioPipelineOutput
class diffusers.AudioPipelineOutput( audios: ndarray )
Parameters
audios (np.ndarray) — List of denoised audio samples as a NumPy array of shape (batch_size, num_channels, sample_rate).
Output class for audio pipelines.
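The callback mechanism documented above can be sketched without running the pipeline. The snippet below simulates the documented behaviour — invoke the callback every callback_steps denoising steps — with a plain list standing in for the torch.FloatTensor latents and hypothetical timestep values:

```python
progress = []


def log_progress(step, timestep, latents):
    # A real callback receives (step: int, timestep: int, latents: torch.FloatTensor);
    # here we just record the step index.
    progress.append(step)


# Simulate a 10-step denoising loop, as the pipeline does internally.
callback_steps = 5
num_inference_steps = 10
latents = [0.0] * 16  # stand-in for the latent tensor
for step in range(num_inference_steps):
    if step % callback_steps == 0:
        log_progress(step, timestep=1000 - 100 * step, latents=latents)

assert progress == [0, 5]
```

With callback_steps=5 over 10 steps, the callback fires at steps 0 and 5 — the same cadence the pipeline would use.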
The Stable Diffusion Guide 🎨
Intro ...
Stable Diffusion is a Latent Diffusion model developed by researchers from the Machine Vision and Learning group at LMU Munich, a.k.a. CompVis.
Model checkpoints were publicly released at the end of August 2022 by a collaboration of Stability AI, CompVis, and Runway with support from EleutherAI and LAION. For more information, you can check out the official blog post. ...
Since its public release, the community has done an incredible job of working together to make the Stable Diffusion checkpoints faster, more memory-efficient, and more performant. ...
🧨 Diffusers offers a simple API to run stable diffusion with all memory, computing, and quality improvements.
This notebook walks you through the improvements one-by-one so you can best leverage StableDiffusionPipeline for inference.
Prompt Engineering 🎨
When running *Stable Diffusion* in inference, we usually want to generate a certain type or style of image and then improve upon it. Improving upon a previously generated image means running inference over and over again with a different prompt and potentially a different seed until we are happy with our generation.
To begin with, it is most important to speed up Stable Diffusion as much as possible so that we can generate as many pictures as possible in a given amount of time.
This can be done by both improving the computational efficiency (speed) and the memory efficiency (GPU RAM).
Let’s start by looking into computational efficiency first.
Throughout the notebook, we will focus on runwayml/stable-diffusion-v1-5:
model_id = "runwayml/stable-diffusion-v1-5"
Let’s load the pipeline.
Speed Optimization ...
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(model_id)
We aim to generate a beautiful photograph of an old warrior chief and will later try to find the best prompt to generate such a photograph. For now, let's keep the prompt simple: ...
prompt = "portrait photo of a old warrior chief" ...
To begin with, we should make sure we run inference on GPU, so let’s move the pipeline to GPU, just like you would with any PyTorch module.
pipe = pipe.to("cuda") ...
To generate an image, you should use the StableDiffusionPipeline.__call__ method.
To make sure we can reproduce more or less the same image in every call, let’s make use of the generator. See the documentation on reproducibility here for more information. ...
import torch

generator = torch.Generator("cuda").manual_seed(0)
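Why a seeded generator makes runs reproducible is the standard seeding argument; a minimal sketch with Python's standard-library random module (torch.Generator behaves analogously for the pipeline):

```python
import random

# Two generators seeded identically yield identical sequences,
# which is what makes seeded pipeline runs reproducible.
a = random.Random(0)
b = random.Random(0)

seq_a = [a.random() for _ in range(3)]
seq_b = [b.random() for _ in range(3)]
assert seq_a == seq_b
```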
Now, let's take it for a spin.
image = pipe(prompt, generator=generator).images[0]
image
Cool, this now took roughly 30 seconds on a T4 GPU (you might see faster inference if your allocated GPU is better than a T4).
The default run we did above used full float32 precision and ran the default number of inference steps (50). The easiest speed-ups come from switching to float16 (or half) precision and simply running fewer inference steps. Let’s load the model now in float16 instead. ...
import torch
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
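To get a feel for why half precision helps, here is a back-of-the-envelope memory estimate; the ~860 million UNet parameter count is an approximate, commonly cited figure for Stable Diffusion v1-5 (an assumption, not taken from this guide):

```python
unet_params = 860_000_000     # approximate SD v1-5 UNet size (assumption)
bytes_fp32 = unet_params * 4  # float32: 4 bytes per parameter
bytes_fp16 = unet_params * 2  # float16: 2 bytes per parameter

gib = 1024 ** 3
print(f"float32: {bytes_fp32 / gib:.2f} GiB")  # ~3.20 GiB
print(f"float16: {bytes_fp16 / gib:.2f} GiB")  # ~1.60 GiB
```

Halving the bytes per parameter halves the weight footprint (and activation memory shrinks similarly), which is why the float16 pipeline both fits on smaller GPUs and tends to run faster on hardware with half-precision support.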
And we can again call the pipeline to generate an image.