__call__
(
batch_size: int = 1
audio_file: str = None
raw_audio: np.ndarray = None
slice: int = 0
start_step: int = 0
steps: int = None
generator: torch.Generator = None
mask_start_secs: float = 0
mask_end_secs: float = 0
step_generator: torch.Generator = None
eta: float = 0
noise: torch.Tensor = None
encoding: torch.Tensor = None
return_dict: bool = True
)
→
List[PIL.Image.Image]
Parameters
batch_size (int) — number of samples to generate
audio_file (str) — path to an audio file on disk (Librosa requires a file on disk), or
raw_audio (np.ndarray) — audio as a numpy array, used when audio_file is not given
slice (int) — slice number of the audio to convert
start_step (int) — step to start de-noising from
steps (int) — number of de-noising steps (defaults to 50 for DDIM and 1000 for DDPM)
generator (torch.Generator) — random number generator, or None
mask_start_secs (float) — number of seconds of audio to mask (not generate) at the start
mask_end_secs (float) — number of seconds of audio to mask (not generate) at the end
step_generator (torch.Generator) — random number generator used to de-noise, or None
eta (float) — parameter between 0 and 1 used with the DDIM scheduler
noise (torch.Tensor) — noise tensor of shape (batch_size, 1, height, width), or None
encoding (torch.Tensor) — conditioning tensor for UNet2DConditionModel, of shape (batch_size, seq_length, cross_attention_dim)
return_dict (bool) — if True, return an AudioPipelineOutput and an ImagePipelineOutput; otherwise return a plain tuple
Returns
List[PIL.Image.Image] — generated mel spectrogram images
(float, List[np.ndarray]) — sample rate and raw audios
Generate a random mel spectrogram from audio input and convert it to audio.
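A minimal usage sketch, not part of the reference itself: the checkpoint name teticio/audio-diffusion-256 is an assumed example, and AudioDiffusionPipeline must be available in your installed diffusers version.

```python
# Sketch: unconditional generation with AudioDiffusionPipeline.
# "teticio/audio-diffusion-256" is an assumed example checkpoint.
import torch
from diffusers import AudioDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = AudioDiffusionPipeline.from_pretrained("teticio/audio-diffusion-256").to(device)

generator = torch.Generator(device=device).manual_seed(42)
output = pipe(batch_size=1, generator=generator)  # return_dict=True by default

image = output.images[0]     # mel spectrogram as a PIL image
audio = output.audios[0, 0]  # raw audio as a 1-D numpy array
```

To rework an existing clip instead of generating from scratch, pass audio_file or raw_audio together with start_step (to begin de-noising partway through the schedule), and use mask_start_secs or mask_end_secs to keep those seconds of the input audio untouched.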
encode
(
images: typing.List[PIL.Image.Image]
steps: int = 50
)
→
np.ndarray
Parameters
images (List[PIL.Image.Image]) — list of images to encode
steps (int) — number of encoding steps to perform (defaults to 50)
Returns
np.ndarray — noise tensor of shape (batch_size, 1, height, width)
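For orientation, a sketch pairing encode with __call__ to round-trip a spectrogram. It assumes pipe, image, and device from the sketch above, and a pipeline configured with a DDIM scheduler, since the deterministic encoding relies on it.

```python
# Sketch: recover the noise behind a generated spectrogram, then reconstruct it.
# Assumes `pipe`, `image`, and `device` from the previous example.
import torch

noise = pipe.encode([image], steps=50)        # noise array for the input image
output = pipe(
    noise=torch.as_tensor(noise).to(device),  # feed the recovered noise back in
    steps=50,
)
reconstruction = output.images[0]             # should closely match `image`
```

Encoding two clips and interpolating between the recovered noise tensors (for example with a spherical interpolation helper, if your diffusers version provides one) is a common way to morph smoothly from one sound to another.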