Enable sliced attention computation.
When this option is enabled, the attention module splits the input tensor into slices and computes attention
in several steps. This is useful to save some memory in exchange for a small speed decrease.
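The memory saving comes from never materializing the full attention-score matrix at once. As a rough, framework-agnostic sketch (not the library's actual implementation), computing the query rows in slices gives the same result as a single pass while only holding a slice-sized score matrix in memory:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_full(q, k, v):
    # Standard scaled dot-product attention: one (seq_len x seq_len) score matrix.
    scores = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return scores @ v

def attention_sliced(q, k, v, slice_size):
    # Process the query rows in slices: only a (slice_size x seq_len)
    # score matrix is materialized at any one time.
    out = [attention_full(q[i:i + slice_size], k, v)
           for i in range(0, q.shape[0], slice_size)]
    return np.concatenate(out, axis=0)

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 4)) for _ in range(3))
# Softmax is applied row-wise, so slicing over queries is exact, not approximate.
assert np.allclose(attention_full(q, k, v), attention_sliced(q, k, v, slice_size=2))
```

Because the softmax is taken over each query row independently, slicing over the query dimension is exact; the trade-off is purely memory versus the overhead of several smaller matrix multiplications.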
disable_attention_slicing < source > ( )
Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go
back to computing attention in one step.
enable_xformers_memory_efficient_attention < source > ( attention_op: typing.Optional[typing.Callable] = None )
Parameters
attention_op (Callable, optional) —
Override the default None operator for use as the op argument to the
memory_efficient_attention()
function of xFormers.
Enable memory efficient attention as implemented in xformers.
When this option is enabled, you should observe lower GPU memory usage and a potential speed-up at inference
time. A speed-up at training time is not guaranteed.
Warning: when memory-efficient attention and sliced attention are both enabled, memory-efficient attention
takes precedence.
Examples:
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp
>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
disable_xformers_memory_efficient_attention < source > ( )
Disable memory efficient attention as implemented in xformers.
enable_sequential_cpu_offload < source > ( gpu_id = 0 )
Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet,
text_encoder, vae and safety checker have their state dicts saved to CPU, and the models are then moved to
torch.device('meta') and loaded to the GPU only when their specific submodule's forward method is called.
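As a toy illustration of this offload pattern (a minimal sketch, not accelerate's implementation), a module can keep its state dict off-device and materialize the weights only for the duration of a forward call; the hypothetical OffloadedModule wrapper below exists purely for this example:

```python
class OffloadedModule:
    """Sketch of sequential offload: weights live on 'CPU' (a stored copy)
    and are resident on the 'device' only while forward() runs."""

    def __init__(self, weights):
        self.cpu_state = dict(weights)   # state dict kept on CPU
        self.device_state = None         # nothing resident on the device

    def forward(self, x):
        # Materialize weights on the device just for this call.
        self.device_state = dict(self.cpu_state)
        try:
            return self.device_state["scale"] * x
        finally:
            # Free device memory again before returning.
            self.device_state = None

m = OffloadedModule({"scale": 3})
assert m.forward(4) == 12
assert m.device_state is None  # weights are not resident between calls
```

This is why sequential offload trades speed for memory: every forward pass pays the cost of moving weights onto the device and back off again.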
MusicLDM
MusicLDM was proposed in MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies by Ke Chen, Yusong Wu, Haohe Liu, Marianna Nezhurina, Taylor Berg-Kirkpatrick, Shlomo Dubnov.
MusicLDM takes a text prompt as input and predicts the corresponding music sample. Inspired by Stable Diffusion and AudioLDM,
MusicLDM is a text-to-music latent diffusion model (LDM) that learns continuous audio representations from CLAP
latents. MusicLDM is trained on a corpus of 466 hours of music data. Beat-synchronous data augmentation strategies are applied to the music samples, both in the time domain and in the latent space. Using beat-synchronous data augmentation strategies encourages the model to interpolate between the training samples, but stay within the domain of the training data. The result is generated music that is more diverse while staying faithful to the corresponding style.
The abstract of the paper is the following:
Diffusion models have shown promising results in cross-modal generation tasks, including text-to-image and text-to-audio generation. However, generating music, as a special type of audio, presents unique challenges due to limited availability of music data and sensitive issues related to copyright and plagiarism. In this paper, to tackle these challenges, we first construct a state-of-the-art text-to-music model, MusicLDM, that adapts Stable Diffusion and AudioLDM architectures to the music domain. We achieve this by retraining the contrastive language-audio pretraining model (CLAP) and the Hifi-GAN vocoder, as components of MusicLDM, on a collection of music data samples. Then, to address the limitations of training data and to avoid plagiarism, we leverage a beat tracking model and propose two different mixup strategies for data augmentation: beat-synchronous audio mixup and beat-synchronous latent mixup, which recombine training audio directly or via a latent embeddings space, respectively. Such mixup strategies encourage the model to interpolate between musical training samples and generate new music within the convex hull of the training data, making the generated music more diverse while still staying faithful to the corresponding style.
In addition to popular evaluation metrics, we design several new evaluation metrics based on CLAP score to demonstrate that our proposed MusicLDM and beat-synchronous mixup strategies improve both the quality and novelty of generated music, as well as the correspondence between input text and generated music.
This pipeline was contributed by sanchit-gandhi.
Tips
When constructing a prompt, keep in mind:
Descriptive prompt inputs work best; use adjectives to describe the sound (for example, “high quality” or “clear”) and make the prompt context specific where possible (e.g. “melodic techno with a fast beat and synths” works better than “techno”).
Using a negative prompt can significantly improve the quality of the generated audio. Try using a negative prompt of “low quality, average quality”.
During inference:
The quality of the generated audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference.
Multiple waveforms can be generated in one go: set num_waveforms_per_prompt to a value greater than 1 to enable. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly.
The length of the generated audio sample can be controlled by varying the audio_length_in_s argument.
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
MusicLDMPipeline
class diffusers.MusicLDMPipeline < source > ( vae: AutoencoderKL text_encoder: Union tokenizer: Union feature_extractor: Optional unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan )
Parameters
vae (AutoencoderKL) —
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (ClapModel) —
Frozen text-audio embedding model (ClapTextModel), specifically the laion/clap-htsat-unfused variant.
tokenizer (PreTrainedTokenizer) —
A RobertaTokenizer to tokenize text.
feature_extractor (ClapFeatureExtractor) —
Feature extractor to compute mel-spectrograms from audio waveforms.
unet (UNet2DConditionModel) —
A UNet2DConditionModel to denoise the encoded audio latents.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
vocoder (SpeechT5HifiGan) —
Vocoder of class SpeechT5HifiGan.
Pipeline for text-to-audio generation using MusicLDM.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
__call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 200 guidance_scale: float = 2.0 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → AudioPipelineOutput or tuple
Parameters
prompt (str or List[str], optional) —
The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds.
audio_length_in_s (int, optional, defaults to 10.24) —
The length of the generated audio sample in seconds.
num_inference_steps (int, optional, defaults to 200) —