AudioLDM
AudioLDM was proposed in AudioLDM: Text-to-Audio Generation with Latent Diffusion Models by Haohe Liu et al. Inspired by Stable Diffusion, AudioLDM is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from CLAP latents. AudioLDM takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional sound effects, human speech and music.
The abstract from the paper is:
Text-to-audio (TTA) system has recently gained attention for its ability to synthesize general audio based on text descriptions. However, previous studies in TTA have limited generation quality with high computational costs. In this study, we propose AudioLDM, a TTA system that is built on a latent space to learn the continuous audio representations from contrastive language-audio pretraining (CLAP) latents. The pretrained CLAP models enable us to train LDMs with audio embedding while providing text embedding as a condition during sampling. By learning the latent representations of audio signals and their compositions without modeling the cross-modal relationship, AudioLDM is advantageous in both generation quality and computational efficiency. Trained on AudioCaps with a single GPU, AudioLDM achieves state-of-the-art TTA performance measured by both objective and subjective metrics (e.g., frechet distance). Moreover, AudioLDM is the first TTA system that enables various text-guided audio manipulations (e.g., style transfer) in a zero-shot fashion. Our implementation and demos are available at this https URL.
The original codebase can be found at haoheliu/AudioLDM.
Tips
When constructing a prompt, keep in mind:
- Descriptive prompt inputs work best; you can use adjectives to describe the sound (for example, "high quality" or "clear") and make the prompt context specific (for example, "water stream in a forest" instead of "stream").
- It's best to use general terms like "cat" or "dog" instead of specific names or abstract objects the model may not be familiar with.
During inference:
- The quality of the predicted audio sample can be controlled by the num_inference_steps argument; more denoising steps give higher quality audio at the expense of slower inference.
- The length of the predicted audio sample can be controlled by varying the audio_length_in_s argument.
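As a rough sketch of how these two arguments trade off speed, quality and clip length (assuming the cvssp/audioldm-s-full-v2 checkpoint used in the example further down this page and a CUDA device; the prompts and step counts are only illustrative):

>>> import torch
>>> from diffusers import AudioLDMPipeline

>>> pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")

>>> # fast draft: few denoising steps and a short clip
>>> draft = pipe("water stream in a forest", num_inference_steps=10, audio_length_in_s=2.5).audios[0]

>>> # higher quality: more denoising steps and a longer clip (slower)
>>> final = pipe("water stream in a forest", num_inference_steps=200, audio_length_in_s=10.0).audios[0]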
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
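For instance, a minimal sketch of swapping in a different scheduler, here LMSDiscreteScheduler, one of the schedulers listed for this pipeline below (whether it improves the speed/quality trade-off for a given checkpoint is something to verify against the Schedulers guide):

>>> import torch
>>> from diffusers import AudioLDMPipeline, LMSDiscreteScheduler

>>> pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2", torch_dtype=torch.float16).to("cuda")

>>> # replace the default scheduler, reusing its existing configuration
>>> pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)

>>> audio = pipe("clear bird song in a quiet forest", num_inference_steps=25).audios[0]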
AudioLDMPipeline[[diffusers.AudioLDMPipeline]]
Pipeline for text-to-audio generation using AudioLDM.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
__call__[[diffusers.AudioLDMPipeline.__call__]]

__call__(prompt=None, audio_length_in_s=None, num_inference_steps=10, guidance_scale=2.5, negative_prompt=None, num_waveforms_per_prompt=1, eta=0.0, generator=None, latents=None, prompt_embeds=None, negative_prompt_embeds=None, return_dict=True, callback=None, callback_steps=1, cross_attention_kwargs=None, output_type='np')

Parameters:
- prompt (str or List[str], optional) : The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds.
- audio_length_in_s (float, optional, defaults to 5.12) : The length of the generated audio sample in seconds.
- num_inference_steps (int, optional, defaults to 10) : The number of denoising steps. More denoising steps usually lead to higher quality audio at the expense of slower inference.
- guidance_scale (float, optional, defaults to 2.5) : A higher guidance scale value encourages the model to generate audio that is closely linked to the text prompt, at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1.
- negative_prompt (str or List[str], optional) : The prompt or prompts to guide what not to include in audio generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
The call function to the pipeline for generation.
Examples:
>>> from diffusers import AudioLDMPipeline
>>> import torch
>>> import scipy
>>> repo_id = "cvssp/audioldm-s-full-v2"
>>> pipe = AudioLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs"
>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0]
>>> # save the audio sample as a .wav file
>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio)
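Building on the example above (reusing pipe, prompt, torch and scipy from that snippet), the following sketch exercises a few more of the __call__ arguments: an illustrative negative prompt, several candidate waveforms per prompt, and a seeded generator for reproducibility:

>>> # fix the random seed so the same audio is produced on every run
>>> generator = torch.Generator("cuda").manual_seed(0)
>>> outputs = pipe(
...     prompt,
...     negative_prompt="low quality, average quality",  # illustrative negative prompt
...     num_inference_steps=10,
...     audio_length_in_s=5.0,
...     num_waveforms_per_prompt=3,  # generate three candidate waveforms for the prompt
...     generator=generator,
... )
>>> # outputs.audios holds the three candidates; save the first one
>>> scipy.io.wavfile.write("techno_seeded.wav", rate=16000, data=outputs.audios[0])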
Parameters:
vae (AutoencoderKL) : Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (ClapTextModelWithProjection) : Frozen text-encoder (ClapTextModelWithProjection, specifically the laion/clap-htsat-unfused variant).
tokenizer (PreTrainedTokenizer) : A RobertaTokenizer to tokenize text.
unet (UNet2DConditionModel) : A UNet2DConditionModel to denoise the encoded audio latents.
scheduler (SchedulerMixin) : A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
vocoder (SpeechT5HifiGan) : Vocoder of class SpeechT5HifiGan.
Returns:
[AudioPipelineOutput](/docs/diffusers/pr_11739/en/api/pipelines/dance_diffusion#diffusers.AudioPipelineOutput) or tuple
If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is
returned where the first element is a list with the generated audio.
AudioPipelineOutput[[diffusers.AudioPipelineOutput]]
Output class for audio pipelines.
Parameters:
audios (np.ndarray) : List of denoised audio samples as a NumPy array of shape (batch_size, num_channels, sample_rate).
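As a small usage sketch of this output class (the checkpoint and prompt are illustrative; the sampling rate is read from the vocoder's configuration rather than hard-coded, assuming the SpeechT5HifiGan config exposes it, which for the AudioLDM checkpoints is 16 kHz):

>>> import torch
>>> import scipy
>>> from diffusers import AudioLDMPipeline

>>> pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2", torch_dtype=torch.float16).to("cuda")
>>> output = pipe("a dog barking in the distance", num_inference_steps=10, audio_length_in_s=5.0)

>>> audio = output.audios[0]  # first generated waveform as a NumPy array
>>> sampling_rate = pipe.vocoder.config.sampling_rate  # 16000 for the AudioLDM checkpoints
>>> scipy.io.wavfile.write("dog_bark.wav", rate=sampling_rate, data=audio)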