google/flan-t5-large variant.
projection_model (AudioLDM2ProjectionModel) —
The trained model used to linearly project the hidden-states from the first and second text encoder models and insert learned SOS and EOS token embeddings. The projected hidden-states from the two text encoders are concatenated to give the input to the language model.
language_model (GPT2Model) —
An auto-regressive language model used to generate a sequence of hidden-states conditioned on the projected outputs from the two text encoders.
tokenizer (RobertaTokenizer) —
Tokenizer to tokenize text for the first frozen text encoder.
tokenizer_2 (T5Tokenizer) —
Tokenizer to tokenize text for the second frozen text encoder.
feature_extractor (ClapFeatureExtractor) —
Feature extractor to pre-process generated audio waveforms into log-mel spectrograms for automatic scoring.
unet (UNet2DConditionModel) —
A UNet2DConditionModel to denoise the encoded audio latents.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
vocoder (SpeechT5HifiGan) —
Vocoder of class SpeechT5HifiGan to convert the mel-spectrogram latents to the final audio waveform.

Pipeline for text-to-audio generation using AudioLDM2.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

__call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 200 guidance_scale: float = 3.5 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 gene...
prompt (str or List[str], optional) —
The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds.
audio_length_in_s (int, optional, defaults to 10.24) —
The length of the generated audio sample in seconds.
num_inference_steps (int, optional, defaults to 200) —
The number of denoising steps. More denoising steps usually lead to higher-quality audio at the expense of slower inference.
guidance_scale (float, optional, defaults to 3.5) —
A higher guidance scale value encourages the model to generate audio that is closely linked to the text prompt, at the expense of lower sound quality. Guidance is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional) —
The prompt or prompts to guide what not to include in audio generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
num_waveforms_per_prompt (int, optional, defaults to 1) —
The number of waveforms to generate per prompt. If num_waveforms_per_prompt > 1, automatic scoring is performed between the generated outputs and the text prompt, ranking the generated waveforms by their cosine similarity with the text input in the joint text-audio embedding space.
eta (float, optional, defaults to 0.0) —
Corresponds to the parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) —
A torch.Generator to make generation deterministic.
latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for spectrogram generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling with the supplied random generator.
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
generated_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings from the GPT2 language model. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings are generated from the prompt input argument.
negative_generated_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_generated_prompt_embeds are computed from the negative_prompt input argument.
attention_mask (torch.LongTensor, optional) —
Pre-computed attention mask to be applied to prompt_embeds. If not provided, the attention mask is computed from the prompt input argument.
negative_attention_mask (torch.LongTensor, optional) —
Pre-computed attention mask to be applied to negative_prompt_embeds. If not provided, the attention mask is computed from the negative_prompt input argument.
max_new_tokens (int, optional, defaults to None) —
Number of new tokens to generate with the GPT2 language model. If not provided, the number of tokens is taken from the config of the model.
return_dict (bool, optional, defaults to True) —
Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
callback (Callable, optional) —
A function called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) —
The frequency at which the callback function is called. If not specified, the callback is called at every step.
cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
output_type (str, optional, defaults to "np") —
The output format of the generated audio. Choose "np" to return a NumPy np.ndarray or "pt" to return a PyTorch torch.Tensor. Set to "latent" to return the latent diffusion model (LDM) output.

Returns

StableDiffusionPipelineOutput or tuple —
If return_dict is True, a StableDiffusionPipelineOutput is returned; otherwise a tuple is returned where the first element is a list with the generated audio.
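As context for the guidance_scale parameter: each denoising step, classifier-free guidance combines the unconditional and text-conditional noise predictions. A minimal scalar sketch of that combination (illustrative only, not the pipeline's internal code; the function name apply_guidance is hypothetical):

```python
def apply_guidance(noise_uncond: float, noise_text: float, guidance_scale: float) -> float:
    # Standard classifier-free guidance: move the prediction from the
    # unconditional estimate toward the text-conditioned estimate.
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)

# guidance_scale = 1.0 leaves the conditional prediction unchanged;
# larger values (e.g. the default 3.5) follow the prompt more strongly.
```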
The call function to the pipeline for generation.

Examples:

>>> import scipy
>>> import torch |
>>> from diffusers import AudioLDM2Pipeline |
>>> repo_id = "cvssp/audioldm2" |
>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) |
>>> pipe = pipe.to("cuda") |
>>> # define the prompts |
>>> prompt = "The sound of a hammer hitting a wooden surface." |
>>> negative_prompt = "Low quality." |
>>> # set the seed for generator |
>>> generator = torch.Generator("cuda").manual_seed(0) |
>>> # run the generation |
>>> audio = pipe( |
... prompt, |
... negative_prompt=negative_prompt, |
... num_inference_steps=200, |
... audio_length_in_s=10.0, |
... num_waveforms_per_prompt=3, |
... generator=generator, |
... ).audios |
>>> # save the best audio sample (index 0) as a .wav file |
>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio[0])

disable_vae_slicing < source > ( )

Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to computing decoding in one step.

enable_model_cpu_offload < source > ( gpu_id = 0 )

Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet.

enable_vae_slicing < source > ( )

Enable sliced VAE decoding. When this option is enabled, the VAE splits the input tensor into slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

encode_prompt < source > ( prompt device num_waveforms_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None generated_prompt_embeds: Optio...

prompt to be encoded
device (torch.device) —
torch device
num_waveforms_per_prompt (int) —
number of waveforms that should be generated per prompt
do_classifier_free_guidance (bool) —
whether to use classifier-free guidance or not
negative_prompt (str or List[str], optional) —
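To illustrate the callback / callback_steps arguments of __call__ described above, a minimal sketch (the function name log_progress is hypothetical; the pipeline itself supplies the step, timestep, and latents arguments):

```python
progress = []

def log_progress(step: int, timestep: int, latents) -> None:
    # Called by the pipeline every `callback_steps` denoising steps;
    # here we just record the step index and the scheduler timestep.
    # `latents` could also be inspected or saved for debugging.
    progress.append((step, int(timestep)))

# Would be passed to the pipeline as, e.g.:
# audio = pipe(prompt, callback=log_progress, callback_steps=10).audios
```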