The prompt or prompts not to guide the audio generation. If not defined, one has to pass
negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is
less than 1).
prompt_embeds (torch.FloatTensor, optional) —
Pre-computed text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, e.g.
prompt weighting. If not provided, text embeddings will be computed from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-computed negative text embeddings from the Flan T5 model. Can be used to easily tweak text inputs,
e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from the
negative_prompt input argument.
generated_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings from the GPT2 language model. Can be used to easily tweak text inputs,
e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input
argument.
negative_generated_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text
inputs, e.g. prompt weighting. If not provided, negative_generated_prompt_embeds will be generated from the
negative_prompt input argument.
attention_mask (torch.LongTensor, optional) —
Pre-computed attention mask to be applied to the prompt_embeds. If not provided, the attention mask will
be computed from the prompt input argument.
negative_attention_mask (torch.LongTensor, optional) —
Pre-computed attention mask to be applied to the negative_prompt_embeds. If not provided, the attention
mask will be computed from the negative_prompt input argument.
max_new_tokens (int, optional, defaults to None) —
The number of new tokens to generate with the GPT2 language model.
Returns
prompt_embeds (torch.FloatTensor):
Text embeddings from the Flan T5 model.
attention_mask (torch.LongTensor):
Attention mask to be applied to the prompt_embeds.
generated_prompt_embeds (torch.FloatTensor):
Text embeddings generated from the GPT2 language model.
Encodes the prompt into text encoder hidden states.
Example:
>>> import scipy
>>> import torch |
>>> from diffusers import AudioLDM2Pipeline |
>>> repo_id = "cvssp/audioldm2" |
>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) |
>>> pipe = pipe.to("cuda") |
>>> # Get text embedding vectors |
>>> prompt_embeds, attention_mask, generated_prompt_embeds = pipe.encode_prompt( |
... prompt="Techno music with a strong, upbeat tempo and high melodic riffs", |
... device="cuda", |
... do_classifier_free_guidance=True, |
... ) |
>>> # Pass text embeddings to pipeline for text-conditional audio generation |
>>> audio = pipe( |
... prompt_embeds=prompt_embeds, |
... attention_mask=attention_mask, |
... generated_prompt_embeds=generated_prompt_embeds, |
... num_inference_steps=200, |
... audio_length_in_s=10.0, |
... ).audios[0] |
>>> # save generated audio sample |
>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio)

generate_language_model
< source >
( inputs_embeds: Tensor = None max_new_tokens: int = 8 **model_kwargs ) → inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size))
Parameters
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) —
The sequence used as a prompt for the generation.
max_new_tokens (int) —
Number of new tokens to generate.
model_kwargs (Dict[str, Any], optional) —
Ad hoc parametrization of additional model-specific kwargs that will be forwarded to the forward
function of the model.
Returns
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size))
The sequence of generated hidden-states. |
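As a rough illustration of what generate_language_model does, here is a minimal stand-in for its autoregressive loop: each step runs the language model over the current sequence of embeddings and appends the last hidden state as the next input, then only the newly generated positions are returned. TinyLM, generate_hidden_states, and all dimensions below are hypothetical and not part of the diffusers API:

```python
import torch
import torch.nn as nn


class TinyLM(nn.Module):
    """Hypothetical stand-in for the GPT2 language model: maps a sequence of
    embeddings to a sequence of hidden states of the same shape."""

    def __init__(self, hidden_size: int = 16):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, inputs_embeds: torch.Tensor) -> torch.Tensor:
        return self.proj(inputs_embeds)


def generate_hidden_states(model: nn.Module, inputs_embeds: torch.Tensor, max_new_tokens: int = 8) -> torch.Tensor:
    # Autoregressive loop: feed the running sequence through the model and
    # append the hidden state at the last position as the next "token".
    for _ in range(max_new_tokens):
        hidden = model(inputs_embeds)           # (batch, seq_len, hidden)
        next_embed = hidden[:, -1:, :]          # last position only
        inputs_embeds = torch.cat([inputs_embeds, next_embed], dim=1)
    # As in the docstring above, return only the newly generated hidden states.
    return inputs_embeds[:, -max_new_tokens:, :]


prompt = torch.randn(1, 4, 16)                  # (batch, sequence_length, hidden_size)
out = generate_hidden_states(TinyLM(), prompt, max_new_tokens=8)
print(out.shape)  # torch.Size([1, 8, 16])
```

The returned tensor has sequence length max_new_tokens, matching the shape documented in the Returns section above.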
Generates a sequence of hidden-states from the language model, conditioned on the embedding inputs.

AudioLDM2ProjectionModel
class diffusers.AudioLDM2ProjectionModel
< source >
( text_encoder_dim text_encoder_1_dim langauge_model_dim )
Parameters
text_encoder_dim (int) —
Dimensionality of the text embeddings from the first text encoder (CLAP).
text_encoder_1_dim (int) —
Dimensionality of the text embeddings from the second text encoder (T5 or VITS).
langauge_model_dim (int) —
Dimensionality of the text embeddings from the language model (GPT2).
A simple linear projection model to map two text embeddings to a shared latent space. It also inserts learned
embedding vectors at the start and end of each text embedding sequence respectively. Each variable appended with
_1 refers to that corresponding to the second text encoder. Otherwise, it is from the first.
forward
< source >
( hidden_states: Optional = None hidden_states_1: Optional = None attention_mask: Optional = None attention_mask_1: Optional = None )

AudioLDM2UNet2DConditionModel
class diffusers.AudioLDM2UNet2DConditionModel
< source >
( … )
Parameters
sample_size (int, optional, defaults to None) —
Height and width of input/output sample.
in_channels (int, optional, defaults to 4) — Number of channels in the input sample.
out_channels (int, optional, defaults to 4) — Number of channels in the output.
flip_sin_to_cos (bool, optional, defaults to False) —
Whether to flip the sin to cos in the time embedding.
freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding.
down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) —
The tuple of downsample blocks to use.
mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") —
Block type for the middle of the UNet; it can only be UNetMidBlock2DCrossAttn for AudioLDM2.
up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) —
The tuple of upsample blocks to use.
only_cross_attention (bool or Tuple[bool], optional, defaults to False) —
Whether to include self-attention in the basic transformer blocks, see
BasicTransformerBlock.
block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) —
The tuple of output channels for each block.
layers_per_block (int, optional, defaults to 2) — The number of layers per block.
downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution.
mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block.
norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization.
If None, normalization and activation layers are skipped in post-processing.
norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization.
cross_attention_dim (int or Tuple[int], optional, defaults to 1280) —
The dimension of the cross attention features.
transformer_layers_per_block (int or Tuple[int], optional, defaults to 1) —
The number of transformer blocks of type BasicTransformerBlock. Only relevant for
~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D,
~models.unet_2d_blocks.UNetMidBlock2DCrossAttn.
attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads.
num_attention_heads (int, optional) —
The number of attention heads. If not defined, defaults to attention_head_dim.
resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config
for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift.
class_embed_type (str, optional, defaults to None) —
The type of class embedding to use, which is ultimately summed with the time embeddings. Choose from None,
"timestep", "identity", "projection", or "simple_projection".
num_class_embeds (int, optional, defaults to None) —
Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing
class conditioning with class_embed_type equal to None.
time_embedding_type (str, optional, defaults to positional) —
The type of position embedding to use for timesteps. Choose from positional or fourier.
time_embedding_dim (int, optional, defaults to None) —
An optional override for the dimension of the projected time embedding.
time_embedding_act_fn (str, optional, defaults to None) —
Optional activation function to use only once on the time embeddings before they are passed to the rest of
the UNet. Choose from silu, mish, gelu, and swish.
timestep_post_act (str, optional, defaults to None) —
The second activation function to use in the timestep embedding. Choose from silu, mish and gelu.
time_cond_proj_dim (int, optional, defaults to None) —
The dimension of the cond_proj layer in the timestep embedding.
conv_in_kernel (int, optional, defaults to 3) — The kernel size of the conv_in layer.
conv_out_kernel (int, optional, defaults to 3) — The kernel size of the conv_out layer.
projection_class_embeddings_input_dim (int, optional) — The dimension of the class_labels input when
class_embed_type="projection". Required when class_embed_type="projection".
class_embeddings_concat (bool, optional, defaults to False) — Whether to concatenate the time
embeddings with the class embeddings.
A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a
sample-shaped output. Compared to the vanilla UNet2DConditionModel, this variant optionally includes an additional
self-attention layer in each Transformer block, as well as multiple cross-attention layers. It also allows for up
to two cross-attention embeddings, encoder_hidden_states and encoder_hidden_states_1.
This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).
forward
< source >
( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True … )
Parameters
sample (torch.FloatTensor) —
The noisy input tensor with the following shape (batch, channel, height, width).
timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input.
encoder_hidden_states (torch.FloatTensor) —
The encoder hidden states with shape (batch, sequence_length, feature_dim).
encoder_attention_mask (torch.Tensor) —
A cross-attention mask of shape (batch, sequence_length) is applied to encoder_hidden_states. If
True, the mask is kept, otherwise if False it is discarded. The mask will be converted into a bias,
which adds large negative values to the attention scores corresponding to "discard" tokens.
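To make the double cross-attention design described above concrete, here is a minimal, self-contained sketch (plain PyTorch, not the diffusers implementation) of a transformer block that attends over two separate conditioning sequences, in the spirit of encoder_hidden_states and encoder_hidden_states_1. All class and variable names are hypothetical:

```python
import torch
import torch.nn as nn


class DoubleCrossAttnBlock(nn.Module):
    """Illustrative block: one self-attention layer followed by two
    cross-attention layers, one per conditioning embedding."""

    def __init__(self, dim: int, cond_dim: int, cond_dim_1: int, heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, kdim=cond_dim, vdim=cond_dim, batch_first=True)
        self.cross_attn_1 = nn.MultiheadAttention(dim, heads, kdim=cond_dim_1, vdim=cond_dim_1, batch_first=True)

    def forward(self, x: torch.Tensor, cond: torch.Tensor, cond_1: torch.Tensor) -> torch.Tensor:
        x = x + self.self_attn(x, x, x)[0]
        # Two cross-attention layers, one per conditioning stream.
        x = x + self.cross_attn(x, cond, cond)[0]
        x = x + self.cross_attn_1(x, cond_1, cond_1)[0]
        return x


block = DoubleCrossAttnBlock(dim=32, cond_dim=48, cond_dim_1=64)
sample = torch.randn(2, 10, 32)   # flattened spatial positions of the noisy sample
cond = torch.randn(2, 7, 48)      # first conditioning stream (e.g. projected text embeddings)
cond_1 = torch.randn(2, 5, 64)    # second conditioning stream (e.g. generated embeddings)
out = block(sample, cond, cond_1)
print(out.shape)  # torch.Size([2, 10, 32])
```

The two conditioning sequences may have different lengths and feature dimensions, which is why the sketch passes separate kdim/vdim values per cross-attention layer.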