| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def forward(
self,
x: torch.Tensor,
x_mask: torch.Tensor,
offset: Union[int, torch.Tensor] = 0
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""Input x.
Args:
x (torch.Tensor): Input tensor (#batch, time, idim).
x_mask (torch.Tensor): ... | Input x.
Args:
x (torch.Tensor): Input tensor (#batch, time, idim).
x_mask (torch.Tensor): Input mask (#batch, 1, time).
Returns:
torch.Tensor: linear input tensor (#batch, time', odim),
where time' = time .
torch.Tensor: linear input mas... | forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/subsampling.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/subsampling.py | MIT |
def __init__(
self,
input_size: int,
output_size: int = 256,
attention_heads: int = 4,
linear_units: int = 2048,
num_blocks: int = 6,
dropout_rate: float = 0.1,
positional_dropout_rate: float = 0.1,
attention_dropout_rate: float = 0.0,
inpu... |
Args:
input_size (int): input dim
output_size (int): dimension of attention
attention_heads (int): the number of heads of multi head attention
linear_units (int): the hidden units number of position-wise feed
forward
num_blocks (int): ... | __init__ | python | abus-aikorea/voice-pro | cosyvoice/transformer/upsample_encoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/upsample_encoder.py | MIT |
def forward(
self,
xs: torch.Tensor,
xs_lens: torch.Tensor,
decoding_chunk_size: int = 0,
num_decoding_left_chunks: int = -1,
) -> Tuple[torch.Tensor, torch.Tensor]:
"""Embed positions in tensor.
Args:
xs: padded input tensor (B, T, D)
... | Embed positions in tensor.
Args:
xs: padded input tensor (B, T, D)
xs_lens: input length (B)
decoding_chunk_size: decoding chunk size for dynamic chunk
0: default for training, use random dynamic chunk.
<0: for decoding, use full chunk.
... | forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/upsample_encoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/upsample_encoder.py | MIT |
def pad_list(xs: List[torch.Tensor], pad_value: int):
"""Perform padding for the list of tensors.
Args:
xs (List): List of Tensors [(T_1, `*`), (T_2, `*`), ..., (T_B, `*`)].
pad_value (float): Value for padding.
Returns:
Tensor: Padded tensor (B, Tmax, `*`).
Examples:
... | Perform padding for the list of tensors.
Args:
xs (List): List of Tensors [(T_1, `*`), (T_2, `*`), ..., (T_B, `*`)].
pad_value (float): Value for padding.
Returns:
Tensor: Padded tensor (B, Tmax, `*`).
Examples:
>>> x = [torch.ones(4), torch.ones(2), torch.ones(1)]
... | pad_list | python | abus-aikorea/voice-pro | cosyvoice/utils/common.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/utils/common.py | MIT |
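The row above documents `pad_list`; a minimal standalone sketch of the described behavior (a hypothetical reimplementation, not necessarily the repo's exact code):

```python
import torch

def pad_list(xs, pad_value=0.0):
    # Pad a list of tensors [(T_1, *), ..., (T_B, *)] into one (B, Tmax, *) tensor.
    n_batch = len(xs)
    max_len = max(x.size(0) for x in xs)
    # Allocate the padded output on the same device/dtype as the inputs.
    pad = xs[0].new_full((n_batch, max_len, *xs[0].size()[1:]), pad_value)
    for i, x in enumerate(xs):
        pad[i, : x.size(0)] = x
    return pad

x = [torch.ones(4), torch.ones(2), torch.ones(1)]
padded = pad_list(x, 0.0)
# padded is (3, 4); row i keeps x[i] and is zero-padded on the right
```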
def th_accuracy(pad_outputs: torch.Tensor, pad_targets: torch.Tensor,
ignore_label: int) -> torch.Tensor:
"""Calculate accuracy.
Args:
pad_outputs (Tensor): Prediction tensors (B * Lmax, D).
pad_targets (LongTensor): Target label tensors (B, Lmax).
ignore_label (int): Ig... | Calculate accuracy.
Args:
pad_outputs (Tensor): Prediction tensors (B * Lmax, D).
pad_targets (LongTensor): Target label tensors (B, Lmax).
ignore_label (int): Ignore label id.
Returns:
torch.Tensor: Accuracy value (0.0 - 1.0).
| th_accuracy | python | abus-aikorea/voice-pro | cosyvoice/utils/common.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/utils/common.py | MIT |
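The `th_accuracy` docstring above implies the following computation; this is a hypothetical sketch (the exact reduction in the repo may differ):

```python
import torch

def th_accuracy(pad_outputs, pad_targets, ignore_label):
    # pad_outputs: (B * Lmax, D) logits; pad_targets: (B, Lmax) labels.
    pad_pred = pad_outputs.view(
        pad_targets.size(0), pad_targets.size(1), pad_outputs.size(1)
    ).argmax(2)
    mask = pad_targets != ignore_label  # positions that count
    correct = torch.sum(pad_pred.masked_select(mask) == pad_targets.masked_select(mask))
    return (correct / torch.sum(mask)).detach()

targets = torch.tensor([[1, 2, -1]])        # -1 marks padding
logits = torch.tensor([[[0.1, 0.9, 0.0],    # predicts 1 (correct)
                        [0.8, 0.1, 0.1],    # predicts 0 (wrong, target 2)
                        [0.0, 0.0, 1.0]]])  # ignored position
acc = th_accuracy(logits.view(-1, 3), targets, ignore_label=-1)
# acc == 0.5 (one correct out of two non-ignored positions)
```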
def subsequent_mask(
size: int,
device: torch.device = torch.device("cpu"),
) -> torch.Tensor:
"""Create mask for subsequent steps (size, size).
This mask is used only in the decoder, which works in an auto-regressive mode.
This means the current step can only attend to the steps to its left.... | Create mask for subsequent steps (size, size).
This mask is used only in the decoder, which works in an auto-regressive mode.
This means the current step can only attend to the steps to its left.
In the encoder, full attention is used when streaming is not necessary and
the sequence is not long. In this c... | subsequent_mask | python | abus-aikorea/voice-pro | cosyvoice/utils/mask.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/utils/mask.py | MIT |
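The causal mask described in the row above is a lower-triangular boolean matrix; a minimal standalone sketch (not the repo's exact code) is:

```python
import torch

def subsequent_mask(size, device=torch.device("cpu")):
    # Lower-triangular boolean mask: position i may attend to positions <= i.
    return torch.tril(torch.ones(size, size, device=device)).bool()

m = subsequent_mask(3)
# [[ True, False, False],
#  [ True,  True, False],
#  [ True,  True,  True]]
```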
def subsequent_chunk_mask_deprecated(
size: int,
chunk_size: int,
num_left_chunks: int = -1,
device: torch.device = torch.device("cpu"),
) -> torch.Tensor:
"""Create mask for subsequent steps (size, size) with chunk size,
this is for the streaming encoder
Args:
size (... | Create mask for subsequent steps (size, size) with chunk size,
this is for the streaming encoder
Args:
size (int): size of mask
chunk_size (int): size of chunk
num_left_chunks (int): number of left chunks
<0: use full chunk
>=0: use num_left_chunks
device ... | subsequent_chunk_mask_deprecated | python | abus-aikorea/voice-pro | cosyvoice/utils/mask.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/utils/mask.py | MIT |
def subsequent_chunk_mask(
size: int,
chunk_size: int,
num_left_chunks: int = -1,
device: torch.device = torch.device("cpu"),
) -> torch.Tensor:
"""Create mask for subsequent steps (size, size) with chunk size,
this is for the streaming encoder
Args:
size (int): size ... | Create mask for subsequent steps (size, size) with chunk size,
this is for the streaming encoder
Args:
size (int): size of mask
chunk_size (int): size of chunk
num_left_chunks (int): number of left chunks
<0: use full chunk
>=0: use num_left_chunks
device ... | subsequent_chunk_mask | python | abus-aikorea/voice-pro | cosyvoice/utils/mask.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/utils/mask.py | MIT |
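The chunked causal mask documented above (own chunk plus a bounded number of left chunks) can be sketched as follows; this is a hypothetical reimplementation of the documented semantics, not the repo's code:

```python
import torch

def subsequent_chunk_mask(size, chunk_size, num_left_chunks=-1):
    # ret[i, j] is True when position i may attend to position j:
    # within i's own chunk, and up to `num_left_chunks` previous chunks
    # (all previous positions when num_left_chunks < 0).
    pos = torch.arange(size)
    block = pos // chunk_size               # chunk index of each position
    end = (block + 1) * chunk_size          # exclusive end of own chunk
    if num_left_chunks < 0:
        start = torch.zeros_like(pos)
    else:
        start = (block - num_left_chunks).clamp(min=0) * chunk_size
    cols = pos.unsqueeze(0)
    return (cols >= start.unsqueeze(1)) & (cols < end.unsqueeze(1))

m = subsequent_chunk_mask(4, 2)
# [[ True,  True, False, False],
#  [ True,  True, False, False],
#  [ True,  True,  True,  True],
#  [ True,  True,  True,  True]]
```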
def add_optional_chunk_mask(xs: torch.Tensor,
masks: torch.Tensor,
use_dynamic_chunk: bool,
use_dynamic_left_chunk: bool,
decoding_chunk_size: int,
static_chunk_size: int,
... | Apply optional mask for encoder.
Args:
xs (torch.Tensor): padded input, (B, L, D), L for max length
mask (torch.Tensor): mask for xs, (B, 1, L)
use_dynamic_chunk (bool): whether to use dynamic chunk or not
use_dynamic_left_chunk (bool): whether to use dynamic left chunk for
... | add_optional_chunk_mask | python | abus-aikorea/voice-pro | cosyvoice/utils/mask.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/utils/mask.py | MIT |
def make_pad_mask(lengths: torch.Tensor, max_len: int = 0) -> torch.Tensor:
"""Make mask tensor containing indices of padded part.
See description of make_non_pad_mask.
Args:
lengths (torch.Tensor): Batch of lengths (B,).
Returns:
torch.Tensor: Mask tensor containing indices of padded ... | Make mask tensor containing indices of padded part.
See description of make_non_pad_mask.
Args:
lengths (torch.Tensor): Batch of lengths (B,).
Returns:
torch.Tensor: Mask tensor containing indices of padded part.
Examples:
>>> lengths = [5, 3, 2]
>>> make_pad_mask(leng... | make_pad_mask | python | abus-aikorea/voice-pro | cosyvoice/utils/mask.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/utils/mask.py | MIT |
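The `make_pad_mask` row above documents the standard padding-mask helper (True at padded positions); a self-contained sketch reproducing the docstring's example:

```python
import torch

def make_pad_mask(lengths, max_len=0):
    # True where a position is beyond that sequence's length (i.e. padding).
    batch = lengths.size(0)
    max_len = max_len if max_len > 0 else int(lengths.max().item())
    seq = torch.arange(max_len, device=lengths.device).unsqueeze(0).expand(batch, max_len)
    return seq >= lengths.unsqueeze(1)

mask = make_pad_mask(torch.tensor([5, 3, 2]))
# [[False, False, False, False, False],
#  [False, False, False,  True,  True],
#  [False, False,  True,  True,  True]]
```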
def __init__(self,
optimizer,
*,
max_steps,
decay_rate=0.5,
min_lr=0.0,
last_epoch=-1,
**kwargs):
"""
From Nemo:
Implementation of the Noam Hold Annealing policy
from th... |
From Nemo:
Implementation of the Noam Hold Annealing policy
from the SqueezeFormer paper.
Unlike NoamAnnealing, the peak learning rate
can be explicitly set for this scheduler.
The schedule first performs linear warmup,
then holds the peak LR, then decays with s... | __init__ | python | abus-aikorea/voice-pro | cosyvoice/utils/scheduler.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/utils/scheduler.py | MIT |
def create_refresh_button(refresh_component, refresh_method, refreshed_args, elem_class, interactive=True):
"""
Copied from https://github.com/AUTOMATIC1111/stable-diffusion-webui
"""
def refresh():
refresh_method()
args = refreshed_args() if callable(refreshed_args) else refreshed_args
... |
Copied from https://github.com/AUTOMATIC1111/stable-diffusion-webui
| create_refresh_button | python | abus-aikorea/voice-pro | src/ui.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/ui.py | MIT |
def __init__(self, threshold=0.5, frame_rate=16000):
"""
Initializes the VoiceActivityDetector with a voice activity detection model and a threshold.
Args:
threshold (float, optional): The probability threshold for detecting voice activity. Defaults to 0.5.
"""
self.... |
Initializes the VoiceActivityDetector with a voice activity detection model and a threshold.
Args:
threshold (float, optional): The probability threshold for detecting voice activity. Defaults to 0.5.
| __init__ | python | abus-aikorea/voice-pro | src/vad.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/vad.py | MIT |
def __call__(self, audio_frame):
"""
Determines if the given audio frame contains speech by comparing the detected speech probability against
the threshold.
Args:
audio_frame (np.ndarray): The audio frame to be analyzed for voice activity. It is expected to be a
... |
Determines if the given audio frame contains speech by comparing the detected speech probability against
the threshold.
Args:
audio_frame (np.ndarray): The audio frame to be analyzed for voice activity. It is expected to be a
NumPy array of aud... | __call__ | python | abus-aikorea/voice-pro | src/vad.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/vad.py | MIT |
def __init__(
self,
model: str = "htdemucs",
repo: Optional[Path] = None,
device: str = "cuda" if th.cuda.is_available() else "cpu",
shifts: int = 1,
overlap: float = 0.25,
split: bool = True,
segment: Optional[int] = None,
jobs: int = 0,
p... |
`class Separator`
=================
Parameters
----------
model: Pretrained model name or signature. Default is htdemucs.
repo: Folder containing all pre-trained models for use.
segment: Length (in seconds) of each segment (only available if `split` is `True`). ... | __init__ | python | abus-aikorea/voice-pro | src/demucs/api.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/api.py | MIT |
def update_parameter(
self,
device: Union[str, _NotProvided] = NotProvided,
shifts: Union[int, _NotProvided] = NotProvided,
overlap: Union[float, _NotProvided] = NotProvided,
split: Union[bool, _NotProvided] = NotProvided,
segment: Optional[Union[int, _NotProvided]] = Not... |
Update the parameters of separation.
Parameters
----------
segment: Length (in seconds) of each segment (only available if `split` is `True`). If not specified, will use the command line option.
shifts: If > 0, will shift in time `wav` by a random amount between 0 a... | update_parameter | python | abus-aikorea/voice-pro | src/demucs/api.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/api.py | MIT |
def separate_tensor(
self, wav: th.Tensor, sr: Optional[int] = None
) -> Tuple[th.Tensor, Dict[str, th.Tensor]]:
"""
Separate a loaded tensor.
Parameters
----------
wav: Waveform of the audio. Should have 2 dimensions, the first is each audio channel, \
w... |
Separate a loaded tensor.
Parameters
----------
wav: Waveform of the audio. Should have 2 dimensions, the first is each audio channel, while the second is the waveform of each channel. Type should be float32. e.g. `tuple(wav.shape) == (2, 884000)` means the audi... | separate_tensor | python | abus-aikorea/voice-pro | src/demucs/api.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/api.py | MIT |
def list_models(repo: Optional[Path] = None) -> Dict[str, Dict[str, Union[str, Path]]]:
"""
List the available models. Please remember that not all the returned models can be
successfully loaded.
Parameters
----------
repo: The repo whose models are to be listed.
Returns
-------
A ... |
List the available models. Please remember that not all the returned models can be
successfully loaded.
Parameters
----------
repo: The repo whose models are to be listed.
Returns
-------
A dict with two keys ("single" for single models and "bag" for bag of models). The values are
... | list_models | python | abus-aikorea/voice-pro | src/demucs/api.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/api.py | MIT |
def __init__(self, models: tp.List[Model],
weights: tp.Optional[tp.List[tp.List[float]]] = None,
segment: tp.Optional[float] = None):
"""
Represents a bag of models with specific weights.
You should call `apply_model` rather than calling directly the forward her... |
Represents a bag of models with specific weights.
You should call `apply_model` rather than calling `forward` directly here for
optimal performance.
Args:
models (list[nn.Module]): list of Demucs/HDemucs models.
weights (list[list[float]]): list of weights. If... | __init__ | python | abus-aikorea/voice-pro | src/demucs/apply.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/apply.py | MIT |
def apply_model(model: tp.Union[BagOfModels, Model],
mix: tp.Union[th.Tensor, TensorChunk],
shifts: int = 1, split: bool = True,
overlap: float = 0.25, transition_power: float = 1.,
progress: bool = False, device=None,
num_workers: int = 0,... |
Apply model to a given mixture.
Args:
shifts (int): if > 0, will shift in time `mix` by a random amount between 0 and 0.5 sec
and apply the opposite shift to the output. This is repeated `shifts` times and
all predictions are averaged. This effectively makes the model time equi... | apply_model | python | abus-aikorea/voice-pro | src/demucs/apply.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/apply.py | MIT |
def read(self,
seek_time=None,
duration=None,
streams=slice(None),
samplerate=None,
channels=None):
"""
Slightly more efficient implementation than stempeg,
in particular, this will extract all stems at once
rather than hav... |
Slightly more efficient implementation than stempeg,
in particular, this will extract all stems at once
rather than having to loop over one file multiple times
for each stream.
Args:
seek_time (float): seek time in seconds or None if no seeking is needed.
... | read | python | abus-aikorea/voice-pro | src/demucs/audio.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/audio.py | MIT |
def convert_audio_channels(wav, channels=2):
"""Convert audio to the given number of channels."""
*shape, src_channels, length = wav.shape
if src_channels == channels:
pass
elif channels == 1:
# Case 1:
# The caller asked for 1-channel audio, but the stream has multiple
# ch... | Convert audio to the given number of channels. | convert_audio_channels | python | abus-aikorea/voice-pro | src/demucs/audio.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/audio.py | MIT |
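The `convert_audio_channels` row above is truncated; a sketch of the conversion it describes, under the assumption that downmixing averages channels, mono is upmixed by expansion, and surplus channels are dropped (the repo's exact branch logic may differ):

```python
import torch

def convert_audio_channels(wav, channels=2):
    # wav: (..., src_channels, length)
    *shape, src_channels, length = wav.shape
    if src_channels == channels:
        return wav
    if channels == 1:
        # Downmix to mono by averaging channels.
        return wav.mean(dim=-2, keepdim=True)
    if src_channels == 1:
        # Upmix mono by repeating the single channel.
        return wav.expand(*shape, channels, length)
    if src_channels >= channels:
        # Keep only the first `channels` channels.
        return wav[..., :channels, :]
    raise ValueError("The audio has fewer channels than requested but is not mono.")

stereo = torch.randn(2, 100)
mono = convert_audio_channels(stereo, channels=1)
back = convert_audio_channels(mono, channels=2)
# mono.shape == (1, 100); back.shape == (2, 100)
```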
def convert_audio(wav, from_samplerate, to_samplerate, channels) -> torch.Tensor:
"""Convert audio from a given samplerate to a target one and target number of channels."""
wav = convert_audio_channels(wav, channels)
return julius.resample_frac(wav, from_samplerate, to_samplerate) | Convert audio from a given samplerate to a target one and target number of channels. | convert_audio | python | abus-aikorea/voice-pro | src/demucs/audio.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/audio.py | MIT |
def i16_pcm(wav):
"""Convert audio to 16 bits integer PCM format."""
if wav.dtype.is_floating_point:
return (wav.clamp_(-1, 1) * (2**15 - 1)).short()
else:
return wav | Convert audio to 16 bits integer PCM format. | i16_pcm | python | abus-aikorea/voice-pro | src/demucs/audio.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/audio.py | MIT |
def f32_pcm(wav):
"""Convert audio to float 32 bits PCM format."""
if wav.dtype.is_floating_point:
return wav
else:
return wav.float() / (2**15 - 1) | Convert audio to float 32 bits PCM format. | f32_pcm | python | abus-aikorea/voice-pro | src/demucs/audio.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/audio.py | MIT |
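The two PCM helpers above (`i16_pcm`, `f32_pcm`) are near-inverses up to int16 quantization; a quick round-trip check using the definitions shown in the rows:

```python
import torch

def i16_pcm(wav):
    # Float [-1, 1] -> int16 PCM.
    if wav.dtype.is_floating_point:
        return (wav.clamp_(-1, 1) * (2**15 - 1)).short()
    return wav

def f32_pcm(wav):
    # int16 PCM -> float32 in [-1, 1].
    if wav.dtype.is_floating_point:
        return wav
    return wav.float() / (2**15 - 1)

x = torch.tensor([0.0, 0.5, -1.0])
roundtrip = f32_pcm(i16_pcm(x.clone()))
# roundtrip matches x to within one quantization step (1/32767)
```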
def as_dtype_pcm(wav, dtype):
"""Convert audio to either f32 pcm or i16 pcm depending on the given dtype."""
if wav.dtype.is_floating_point:
return f32_pcm(wav)
else:
return i16_pcm(wav) | Convert audio to either f32 pcm or i16 pcm depending on the given dtype. | as_dtype_pcm | python | abus-aikorea/voice-pro | src/demucs/audio.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/audio.py | MIT |
def encode_mp3(wav, path, samplerate=44100, bitrate=320, quality=2, verbose=False):
"""Save given audio as mp3. This should work on all OSes."""
C, T = wav.shape
wav = i16_pcm(wav)
encoder = lameenc.Encoder()
encoder.set_bit_rate(bitrate)
encoder.set_in_sample_rate(samplerate)
encoder.set_ch... | Save given audio as mp3. This should work on all OSes. | encode_mp3 | python | abus-aikorea/voice-pro | src/demucs/audio.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/audio.py | MIT |
def prevent_clip(wav, mode='rescale'):
"""
Different strategies for avoiding raw clipping.
"""
if mode is None or mode == 'none':
return wav
assert wav.dtype.is_floating_point, "too late for clipping"
if mode == 'rescale':
wav = wav / max(1.01 * wav.abs().max(), 1)
elif mode ... |
Different strategies for avoiding raw clipping.
| prevent_clip | python | abus-aikorea/voice-pro | src/demucs/audio.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/audio.py | MIT |
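The `prevent_clip` row above is truncated after the `rescale` branch; a standalone sketch of the three strategies named in the `save_audio` row below (`rescale` is taken from the visible code, `clamp` and `tanh` bodies are assumptions):

```python
import torch

def prevent_clip(wav, mode='rescale'):
    # Keep the waveform inside [-1, 1] before integer PCM encoding.
    if mode is None or mode == 'none':
        return wav
    assert wav.dtype.is_floating_point, "too late for clipping"
    if mode == 'rescale':
        # Scale everything down so the peak stays just under 1.
        return wav / max(1.01 * wav.abs().max().item(), 1.0)
    if mode == 'clamp':
        return wav.clamp(-0.99, 0.99)  # assumed bound
    if mode == 'tanh':
        return torch.tanh(wav)
    raise ValueError(f"Invalid mode {mode}")

loud = torch.tensor([0.5, -2.0, 1.5])
safe = prevent_clip(loud)
# safe peaks strictly below 1; quiet audio is returned unchanged
```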
def save_audio(wav: torch.Tensor,
path: tp.Union[str, Path],
samplerate: int,
bitrate: int = 320,
clip: tp.Literal["rescale", "clamp", "tanh", "none"] = 'rescale',
bits_per_sample: tp.Literal[16, 24, 32] = 16,
as_float: bool = Fal... | Save audio file, automatically preventing clipping if necessary
based on the given `clip` strategy. If the path ends in `.mp3`, this
will save as mp3 with the given `bitrate`. Use `preset` to set mp3 quality:
2 for highest quality, 7 for fastest speed
| save_audio | python | abus-aikorea/voice-pro | src/demucs/audio.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/audio.py | MIT |
def __init__(self, proba=1, group_size=4):
"""
Shuffle sources within one batch.
Each batch is divided into groups of size `group_size` and shuffling is done within
each group separately. This allows keeping the same probability distribution no matter
the number of GPUs. Without th... |
Shuffle sources within one batch.
Each batch is divided into groups of size `group_size` and shuffling is done within
each group separately. This allows keeping the same probability distribution no matter
the number of GPUs. Without this grouping, using more GPUs would lead to a higher
... | __init__ | python | abus-aikorea/voice-pro | src/demucs/augment.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/augment.py | MIT |
def rescale_conv(conv, reference):
"""Rescale initial weight scale. It is unclear why it helps but it certainly does.
"""
std = conv.weight.std().detach()
scale = (std / reference)**0.5
conv.weight.data /= scale
if conv.bias is not None:
conv.bias.data /= scale | Rescale initial weight scale. It is unclear why it helps but it certainly does.
| rescale_conv | python | abus-aikorea/voice-pro | src/demucs/demucs.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/demucs.py | MIT |
def __init__(self, channels: int, compress: float = 4, depth: int = 2, init: float = 1e-4,
norm=True, attn=False, heads=4, ndecay=4, lstm=False, gelu=True,
kernel=3, dilate=True):
"""
Args:
channels: input/output channels for residual branch.
com... |
Args:
channels: input/output channels for residual branch.
compress: amount of channel compression inside the branch.
depth: number of layers in the residual branch. Each layer has its own
projection, and potentially LSTM and attention.
init: init... | __init__ | python | abus-aikorea/voice-pro | src/demucs/demucs.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/demucs.py | MIT |
def __init__(self,
sources,
# Channels
audio_channels=2,
channels=64,
growth=2.,
# Main structure
depth=6,
rewrite=True,
lstm_layers=0,
# Convolutions... |
Args:
sources (list[str]): list of source names
audio_channels (int): stereo or mono
channels (int): first convolution channels
depth (int): number of encoder/decoder layers
growth (float): multiply (resp divide) number of channels by that
... | __init__ | python | abus-aikorea/voice-pro | src/demucs/demucs.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/demucs.py | MIT |
def valid_length(self, length):
"""
Return the nearest valid length to use with the model so that
there are no time steps left over in a convolution, e.g. for all
layers, (size of the input - kernel_size) % stride == 0.
Note that inputs are automatically padded if necessary to ensure... |
Return the nearest valid length to use with the model so that
there are no time steps left over in a convolution, e.g. for all
layers, (size of the input - kernel_size) % stride == 0.
Note that inputs are automatically padded if necessary to ensure that the output
has the same lengt... | valid_length | python | abus-aikorea/voice-pro | src/demucs/demucs.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/demucs.py | MIT |
def loader(dataset, *args, shuffle=False, klass=DataLoader, **kwargs):
"""
Create a dataloader properly in case of distributed training.
If a gradient is going to be computed you must set `shuffle=True`.
"""
if world_size == 1:
return klass(dataset, *args, shuffle=shuffle, **kwargs)
if ... |
Create a dataloader properly in case of distributed training.
If a gradient is going to be computed you must set `shuffle=True`.
| loader | python | abus-aikorea/voice-pro | src/demucs/distrib.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/distrib.py | MIT |
def evaluate(solver, compute_sdr=False):
"""
Evaluate model using museval.
compute_sdr=False means using only the MDX definition of the SDR, which
is much faster to evaluate.
"""
args = solver.args
output_dir = solver.folder / "results"
output_dir.mkdir(exist_ok=True, parents=True)
... |
Evaluate model using museval.
compute_sdr=False means using only the MDX definition of the SDR, which
is much faster to evaluate.
| evaluate | python | abus-aikorea/voice-pro | src/demucs/evaluate.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/evaluate.py | MIT |
def pad1d(x: torch.Tensor, paddings: tp.Tuple[int, int], mode: str = 'constant', value: float = 0.):
"""Tiny wrapper around F.pad, just to allow for reflect padding on small input.
If this is the case, we insert extra 0 padding to the right before the reflection happens."""
x0 = x
length = x.shape[-1]
... | Tiny wrapper around F.pad, just to allow for reflect padding on small input.
If this is the case, we insert extra 0 padding to the right before the reflection happens. | pad1d | python | abus-aikorea/voice-pro | src/demucs/hdemucs.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/hdemucs.py | MIT |
def __init__(self, chin, chout, kernel_size=8, stride=4, norm_groups=1, empty=False,
freq=True, dconv=True, norm=True, context=0, dconv_kw={}, pad=True,
rewrite=True):
"""Encoder layer. This used both by the time and the frequency branch.
Args:
chin: number... | Encoder layer. This used both by the time and the frequency branch.
Args:
chin: number of input channels.
chout: number of output channels.
norm_groups: number of groups for group norm.
empty: used to make a layer with just the first conv. this is used
... | __init__ | python | abus-aikorea/voice-pro | src/demucs/hdemucs.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/hdemucs.py | MIT |
def forward(self, x, inject=None):
"""
`inject` is used to inject the result from the time branch into the frequency branch,
when both have the same stride.
"""
if not self.freq and x.dim() == 4:
B, C, Fr, T = x.shape
x = x.view(B, -1, T)
if not s... |
`inject` is used to inject the result from the time branch into the frequency branch,
when both have the same stride.
| forward | python | abus-aikorea/voice-pro | src/demucs/hdemucs.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/hdemucs.py | MIT |
def __init__(self, layer, split_ratios):
"""
Args:
layer: module to clone, must be either HEncLayer or HDecLayer.
split_ratios: list of float indicating which ratio to keep for each band.
"""
super().__init__()
self.split_ratios = split_ratios
self... |
Args:
layer: module to clone, must be either HEncLayer or HDecLayer.
split_ratios: list of float indicating which ratio to keep for each band.
| __init__ | python | abus-aikorea/voice-pro | src/demucs/hdemucs.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/hdemucs.py | MIT |
def __init__(self, chin, chout, last=False, kernel_size=8, stride=4, norm_groups=1, empty=False,
freq=True, dconv=True, norm=True, context=1, dconv_kw={}, pad=True,
context_freq=True, rewrite=True):
"""
Same as HEncLayer but for decoder. See `HEncLayer` for documentatio... |
Same as HEncLayer but for decoder. See `HEncLayer` for documentation.
| __init__ | python | abus-aikorea/voice-pro | src/demucs/hdemucs.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/hdemucs.py | MIT |
def __init__(self,
sources,
# Channels
audio_channels=2,
channels=48,
channels_time=None,
growth=2,
# STFT
nfft=4096,
wiener_iters=0,
end_iters=0,
... |
Args:
sources (list[str]): list of source names.
audio_channels (int): input/output audio channels.
channels (int): initial number of hidden channels.
channels_time: if not None, use a different `channels` value for the time branch.
growth: increase t... | __init__ | python | abus-aikorea/voice-pro | src/demucs/hdemucs.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/hdemucs.py | MIT |
def __init__(
self,
sources,
# Channels
audio_channels=2,
channels=48,
channels_time=None,
growth=2,
# STFT
nfft=4096,
wiener_iters=0,
end_iters=0,
wiener_residual=False,
cac=True,
# Main structure
de... |
Args:
sources (list[str]): list of source names.
audio_channels (int): input/output audio channels.
channels (int): initial number of hidden channels.
channels_time: if not None, use a different `channels` value for the time branch.
growth: increase t... | __init__ | python | abus-aikorea/voice-pro | src/demucs/htdemucs.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/htdemucs.py | MIT |
def valid_length(self, length: int):
"""
Return a length that is appropriate for evaluation.
In our case, always return the training length, unless
it is smaller than the given length, in which case this
raises an error.
"""
if not self.use_train_segment:
... |
Return a length that is appropriate for evaluation.
In our case, always return the training length, unless
it is smaller than the given length, in which case this
raises an error.
| valid_length | python | abus-aikorea/voice-pro | src/demucs/htdemucs.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/htdemucs.py | MIT |
def get_model(name: str,
repo: tp.Optional[Path] = None):
"""`name` must be a bag of models name or a pretrained signature
from the remote AWS model repo or the specified local repo if `repo` is not None.
"""
if name == 'demucs_unittest':
return demucs_unittest()
model_repo: Mo... | `name` must be a bag of models name or a pretrained signature
from the remote AWS model repo or the specified local repo if `repo` is not None.
| get_model | python | abus-aikorea/voice-pro | src/demucs/pretrained.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/pretrained.py | MIT |
def get_model_from_args(args):
"""
Load local model package or pre-trained model.
"""
if args.name is None:
args.name = DEFAULT_MODEL
print(bold("Important: the default model was recently changed to `htdemucs`"),
"the latest Hybrid Transformer Demucs model. In some cases, t... |
Load local model package or pre-trained model.
| get_model_from_args | python | abus-aikorea/voice-pro | src/demucs/pretrained.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/pretrained.py | MIT |
def repitch(wav, pitch, tempo, voice=False, quick=False, samplerate=44100):
"""
tempo is a relative delta in percentage, so tempo=10 means tempo at 110%!
pitch is in semi tones.
Requires `soundstretch` to be installed, see
https://www.surina.net/soundtouch/soundstretch.html
"""
infile = temp... |
tempo is a relative delta in percentage, so tempo=10 means tempo at 110%!
pitch is in semi tones.
Requires `soundstretch` to be installed, see
https://www.surina.net/soundtouch/soundstretch.html
| repitch | python | abus-aikorea/voice-pro | src/demucs/repitch.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/repitch.py | MIT |
def _reset(self):
"""Reset state of the solver, potentially using checkpoint."""
if self.checkpoint_file.exists():
logger.info(f'Loading checkpoint model: {self.checkpoint_file}')
package = torch.load(self.checkpoint_file, 'cpu')
self.model.load_state_dict(package['st... | Reset state of the solver, potentially using checkpoint. | _reset | python | abus-aikorea/voice-pro | src/demucs/solver.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/solver.py | MIT |
def get_quantizer(model, args, optimizer=None):
"""Return the quantizer given the XP quantization args."""
quantizer = None
if args.diffq:
_check_diffq()
from diffq import DiffQuantizer
quantizer = DiffQuantizer(
model, min_size=args.min_size, group_size=args.group_size)
... | Return the quantizer given the XP quantization args. | get_quantizer | python | abus-aikorea/voice-pro | src/demucs/states.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/states.py | MIT |
def load_model(path_or_package, strict=False):
"""Load a model from the given serialized model, either given as a dict (already loaded)
or a path to a file on disk."""
if isinstance(path_or_package, dict):
package = path_or_package
elif isinstance(path_or_package, (str, Path)):
with warn... | Load a model from the given serialized model, either given as a dict (already loaded)
or a path to a file on disk. | load_model | python | abus-aikorea/voice-pro | src/demucs/states.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/states.py | MIT |
def get_state(model, quantizer, half=False):
"""Get the state from a model, potentially with quantization applied.
If `half` is True, the model is stored in half precision, which shouldn't impact performance
but halves the state size."""
if quantizer is None:
dtype = torch.half if half else None
... | Get the state from a model, potentially with quantization applied.
If `half` is True, the model is stored in half precision, which shouldn't impact performance
but halves the state size. | get_state | python | abus-aikorea/voice-pro | src/demucs/states.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/states.py | MIT |
def set_state(model, state, quantizer=None):
"""Set the state on a given model."""
if state.get('__quantized'):
if quantizer is not None:
quantizer.restore_quantized_state(model, state['quantized'])
else:
_check_diffq()
from diffq import restore_quantized_stat... | Set the state on a given model. | set_state | python | abus-aikorea/voice-pro | src/demucs/states.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/states.py | MIT |
def save_with_checksum(content, path):
"""Save the given value on disk, along with a sha256 hash.
Should be used with the output of either `serialize_model` or `get_state`."""
buf = io.BytesIO()
torch.save(content, buf)
sig = hashlib.sha256(buf.getvalue()).hexdigest()[:8]
path = path.parent / (... | Save the given value on disk, along with a sha256 hash.
Should be used with the output of either `serialize_model` or `get_state`. | save_with_checksum | python | abus-aikorea/voice-pro | src/demucs/states.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/states.py | MIT |
def swap_state(model, state):
"""
Context manager that swaps the state of a model, e.g:
# model is in old state
with swap_state(model, new_state):
# model in new state
# model back to old state
"""
old_state = copy_state(model.state_dict())
model.load_state_dict(... |
Context manager that swaps the state of a model, e.g:
# model is in old state
with swap_state(model, new_state):
# model in new state
# model back to old state
| swap_state | python | abus-aikorea/voice-pro | src/demucs/states.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/states.py | MIT |
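A minimal stdlib sketch of the same context-manager pattern, using a plain dict to stand in for a model's state (the real helper works on `state_dict()`/`load_state_dict()`):

```python
from contextlib import contextmanager


@contextmanager
def swap_dict_state(model: dict, new_state: dict):
    """Temporarily replace the contents of `model`, restoring them on exit."""
    old_state = dict(model)        # analogous to copy_state(model.state_dict())
    model.clear()
    model.update(new_state)
    try:
        yield
    finally:                       # restore even if the block raises
        model.clear()
        model.update(old_state)


model = {"weight": 1}
with swap_dict_state(model, {"weight": 2}):
    inside = model["weight"]       # new state while the block runs
after = model["weight"]            # old state restored afterwards
```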
def power_iteration(m, niters=1, bs=1):
"""This is the power method. batch size is used to try multiple starting point in parallel."""
assert m.dim() == 2
assert m.shape[0] == m.shape[1]
dim = m.shape[0]
b = torch.randn(dim, bs, device=m.device, dtype=m.dtype)
for _ in range(niters):
        n ... | This is the power method. batch size is used to try multiple starting points in parallel. | power_iteration | python | abus-aikorea/voice-pro | src/demucs/svd.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/svd.py | MIT |
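The power method itself is easy to check on a tiny matrix. This pure-Python sketch uses a single starting vector (no batching) and reads off the dominant eigenvalue via a Rayleigh quotient:

```python
def power_iter_2x2(m, niters: int = 50) -> float:
    """Estimate the dominant eigenvalue of a 2x2 matrix given as nested lists."""
    v = [1.0, 1.0]                       # arbitrary non-zero starting vector
    for _ in range(niters):
        w = [m[0][0] * v[0] + m[0][1] * v[1],
             m[1][0] * v[0] + m[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]   # renormalize each iteration
    mv = [m[0][0] * v[0] + m[0][1] * v[1],
          m[1][0] * v[0] + m[1][1] * v[1]]
    return mv[0] * v[0] + mv[1] * v[1]   # Rayleigh quotient v^T M v


top = power_iter_2x2([[2.0, 0.0], [0.0, 1.0]])
```

For the diagonal matrix above the dominant eigenvalue is 2, which the iteration converges to quickly.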
def svd_penalty(model, min_size=0.1, dim=1, niters=2, powm=False, convtr=True,
proba=1, conv_only=False, exact=False, bs=1):
"""
Penalty on the largest singular value for a layer.
Args:
- model: model to penalize
- min_size: minimum size in MB of a layer to penalize.
... |
Penalty on the largest singular value for a layer.
Args:
- model: model to penalize
- min_size: minimum size in MB of a layer to penalize.
- dim: projection dimension for the svd_lowrank. Higher is better but slower.
- niters: number of iterations in the algorithm used by svd_lo... | svd_penalty | python | abus-aikorea/voice-pro | src/demucs/svd.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/svd.py | MIT |
def create_2d_sin_embedding(d_model, height, width, device="cpu", max_period=10000):
"""
:param d_model: dimension of the model
:param height: height of the positions
:param width: width of the positions
:return: d_model*height*width position matrix
"""
if d_model % 4 != 0:
raise Val... |
:param d_model: dimension of the model
:param height: height of the positions
:param width: width of the positions
:return: d_model*height*width position matrix
| create_2d_sin_embedding | python | abus-aikorea/voice-pro | src/demucs/transformer.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/transformer.py | MIT |
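For intuition, here is the classic 1-D sinusoidal embedding the 2-D variant is built from; the function name and list-of-lists return type are choices made for this sketch, not the demucs signature:

```python
import math


def sin_embedding(d_model: int, length: int, max_period: float = 10000.0):
    """Return a length x d_model table of interleaved sin/cos position codes."""
    assert d_model % 2 == 0, "d_model must be even"
    table = []
    for pos in range(length):
        row = []
        for i in range(0, d_model, 2):
            angle = pos / (max_period ** (i / d_model))
            row.append(math.sin(angle))   # even slots carry sin
            row.append(math.cos(angle))   # odd slots carry cos
        table.append(row)
    return table


pe = sin_embedding(4, 3)
```

The 2-D version concatenates one such code for the height axis and one for the width axis, which is why `d_model` must be divisible by 4.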
def get_elementary_mask(
T1,
T2,
mask_type,
sparse_attn_window,
global_window,
mask_random_seed,
sparsity,
device,
):
"""
When the input of the Decoder has length T1 and the output T2
The mask matrix has shape (T2, T1)
"""
assert mask_type in ["diag", "jmask", "random... |
When the input of the Decoder has length T1 and the output T2
The mask matrix has shape (T2, T1)
| get_elementary_mask | python | abus-aikorea/voice-pro | src/demucs/transformer.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/transformer.py | MIT |
def get_mask(
T1,
T2,
mask_type,
sparse_attn_window,
global_window,
mask_random_seed,
sparsity,
device,
):
"""
Return a SparseCSRTensor mask that is a combination of elementary masks
mask_type can be a combination of multiple masks: for instance "diag_jmask_random"
"""
... |
Return a SparseCSRTensor mask that is a combination of elementary masks
mask_type can be a combination of multiple masks: for instance "diag_jmask_random"
| get_mask | python | abus-aikorea/voice-pro | src/demucs/transformer.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/transformer.py | MIT |
def __init__(self, channels: int, init: float = 0, channel_last=False):
"""
channel_last = False corresponds to (B, C, T) tensors
channel_last = True corresponds to (T, B, C) tensors
"""
super().__init__()
self.channel_last = channel_last
self.scale = nn.Parameter... |
channel_last = False corresponds to (B, C, T) tensors
channel_last = True corresponds to (T, B, C) tensors
| __init__ | python | abus-aikorea/voice-pro | src/demucs/transformer.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/transformer.py | MIT |
def forward(self, x):
"""
x: (B, T, C)
if num_groups=1: Normalisation on all T and C together for each B
"""
x = x.transpose(1, 2)
return super().forward(x).transpose(1, 2) |
x: (B, T, C)
if num_groups=1: Normalisation on all T and C together for each B
| forward | python | abus-aikorea/voice-pro | src/demucs/transformer.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/transformer.py | MIT |
def forward(self, src, src_mask=None, src_key_padding_mask=None):
"""
if batch_first = False, src shape is (T, B, C)
the case where batch_first=True is not covered
"""
device = src.device
x = src
T, B, C = x.shape
if self.sparse and not self.auto_sparsity:... |
if batch_first = False, src shape is (T, B, C)
the case where batch_first=True is not covered
| forward | python | abus-aikorea/voice-pro | src/demucs/transformer.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/transformer.py | MIT |
def forward(self, q, k, mask=None):
"""
Args:
q: tensor of shape (T, B, C)
k: tensor of shape (S, B, C)
mask: tensor of shape (T, S)
"""
device = q.device
T, B, C = q.shape
S, B, C = k.shape
if self.sparse and not self.auto_spa... |
Args:
q: tensor of shape (T, B, C)
k: tensor of shape (S, B, C)
mask: tensor of shape (T, S)
| forward | python | abus-aikorea/voice-pro | src/demucs/transformer.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/transformer.py | MIT |
def unfold(a, kernel_size, stride):
"""Given input of size [*OT, T], output Tensor of size [*OT, F, K]
with K the kernel size, by extracting frames with the given stride.
This will pad the input so that `F = ceil(T / K)`.
see https://github.com/pytorch/pytorch/issues/60466
"""
*shape, length =... | Given input of size [*OT, T], output Tensor of size [*OT, F, K]
with K the kernel size, by extracting frames with the given stride.
This will pad the input so that `F = ceil(T / K)`.
see https://github.com/pytorch/pytorch/issues/60466
| unfold | python | abus-aikorea/voice-pro | src/demucs/utils.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/utils.py | MIT |
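A list-based sketch of the framing-with-padding idea: frames of length `kernel_size` are taken every `stride` samples, with zero padding so the last frame is complete. The real demucs helper works on tensors, and its exact frame-count convention may differ from this sketch:

```python
import math


def unfold_list(xs, kernel_size: int, stride: int):
    """Extract overlapping frames, zero-padding so the final frame is full."""
    n_frames = math.ceil(len(xs) / stride)
    needed = (n_frames - 1) * stride + kernel_size
    padded = list(xs) + [0] * (needed - len(xs))
    return [padded[i * stride: i * stride + kernel_size] for i in range(n_frames)]


frames = unfold_list([1, 2, 3, 4, 5], kernel_size=4, stride=2)
```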
def center_trim(tensor: torch.Tensor, reference: tp.Union[torch.Tensor, int]):
"""
Center trim `tensor` with respect to `reference`, along the last dimension.
`reference` can also be a number, representing the length to trim to.
If the size difference != 0 mod 2, the extra sample is removed on the right... |
Center trim `tensor` with respect to `reference`, along the last dimension.
`reference` can also be a number, representing the length to trim to.
If the size difference != 0 mod 2, the extra sample is removed on the right side.
| center_trim | python | abus-aikorea/voice-pro | src/demucs/utils.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/utils.py | MIT |
def EMA(beta: float = 1):
"""
Exponential Moving Average callback.
    Returns a single function that can be called to repeatedly update the EMA
with a dict of metrics. The callback will return
the new averaged dict of metrics.
Note that for `beta=1`, this is just plain averaging.
"""
fix: ... |
Exponential Moving Average callback.
    Returns a single function that can be called to repeatedly update the EMA
with a dict of metrics. The callback will return
the new averaged dict of metrics.
Note that for `beta=1`, this is just plain averaging.
| EMA | python | abus-aikorea/voice-pro | src/demucs/utils.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/utils.py | MIT |
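A closure-based sketch of that callback, following the description above (names are illustrative, not the demucs internals); with `beta=1` it reduces to plain averaging:

```python
from collections import defaultdict


def ema(beta: float = 1.0):
    """Return an update(metrics, weight) callback keeping exponential averages."""
    total = defaultdict(float)   # decayed weighted sums per metric
    fix = defaultdict(float)     # decayed normalizers per metric

    def update(metrics, weight: float = 1.0):
        for key, value in metrics.items():
            total[key] = total[key] * beta + weight * value
            fix[key] = fix[key] * beta + weight
        return {key: total[key] / fix[key] for key in total}

    return update


avg = ema()                      # beta=1 -> plain running average
first = avg({"loss": 2.0})
second = avg({"loss": 4.0})
```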
def sizeof_fmt(num: float, suffix: str = 'B'):
"""
Given `num` bytes, return human readable size.
Taken from https://stackoverflow.com/a/1094933
"""
for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']:
if abs(num) < 1024.0:
return "%3.1f%s%s" % (num, unit, suffix)
... |
Given `num` bytes, return human readable size.
Taken from https://stackoverflow.com/a/1094933
| sizeof_fmt | python | abus-aikorea/voice-pro | src/demucs/utils.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/utils.py | MIT |
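A completed stdlib version of that helper; the fallback unit after the loop is an assumption, since that part of the original is truncated here:

```python
def sizeof_fmt(num: float, suffix: str = "B") -> str:
    """Render a byte count as a human-readable string, e.g. 2048 -> '2.0KiB'."""
    for unit in ["", "Ki", "Mi", "Gi", "Ti", "Pi", "Ei", "Zi"]:
        if abs(num) < 1024.0:
            return "%3.1f%s%s" % (num, unit, suffix)
        num /= 1024.0
    return "%.1f%s%s" % (num, "Yi", suffix)   # assumed fallback past zettabytes
```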
def build_metadata(path, sources, normalize=True, ext=EXT):
"""
Build the metadata for `Wavset`.
Args:
path (str or Path): path to dataset.
sources (list[str]): list of sources to look for.
normalize (bool): if True, loads full track and store normalization
values based ... |
Build the metadata for `Wavset`.
Args:
path (str or Path): path to dataset.
sources (list[str]): list of sources to look for.
normalize (bool): if True, loads full track and store normalization
values based on the mixture file.
ext (str): extension of audio files (d... | build_metadata | python | abus-aikorea/voice-pro | src/demucs/wav.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/wav.py | MIT |
def __init__(
self,
root, metadata, sources,
segment=None, shift=None, normalize=True,
samplerate=44100, channels=2, ext=EXT):
"""
Waveset (or mp3 set for that matter). Can be used to train
with arbitrary sources. Each track should be one folder in... |
Waveset (or mp3 set for that matter). Can be used to train
with arbitrary sources. Each track should be one folder inside of `path`.
The folder should contain files named `{source}.{ext}`.
Args:
root (Path or str): root folder for the dataset.
metadata (dict): o... | __init__ | python | abus-aikorea/voice-pro | src/demucs/wav.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/wav.py | MIT |
def get_wav_datasets(args, name='wav'):
"""Extract the wav datasets from the XP arguments."""
path = getattr(args, name)
sig = hashlib.sha1(str(path).encode()).hexdigest()[:8]
metadata_file = Path(args.metadata) / ('wav_' + sig + ".json")
train_path = Path(path) / "train"
valid_path = Path(path)... | Extract the wav datasets from the XP arguments. | get_wav_datasets | python | abus-aikorea/voice-pro | src/demucs/wav.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/wav.py | MIT |
def get_musdb_wav_datasets(args):
"""Extract the musdb dataset from the XP arguments."""
sig = hashlib.sha1(str(args.musdb).encode()).hexdigest()[:8]
metadata_file = Path(args.metadata) / ('musdb_' + sig + ".json")
root = Path(args.musdb) / "train"
if not metadata_file.is_file() and distrib.rank == ... | Extract the musdb dataset from the XP arguments. | get_musdb_wav_datasets | python | abus-aikorea/voice-pro | src/demucs/wav.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/wav.py | MIT |
def get_grid_metrics(self):
"""Return the metrics that should be displayed in the tracking table.
"""
return [
tt.group("train", [
tt.leaf("epoch"),
tt.leaf("reco", ".3f"),
], align=">"),
tt.group("valid", [
tt.... | Return the metrics that should be displayed in the tracking table.
| get_grid_metrics | python | abus-aikorea/voice-pro | src/demucs/grids/_explorers.py | https://github.com/abus-aikorea/voice-pro/blob/master/src/demucs/grids/_explorers.py | MIT |
def train(cfg: DictConfig) -> Tuple[Dict[str, Any], Dict[str, Any]]:
"""Trains the model. Can additionally evaluate on a testset, using best weights obtained during
training.
    This method is wrapped in an optional @task_wrapper decorator that controls the behavior during
failure. Useful for multiruns, sav... | Trains the model. Can additionally evaluate on a testset, using best weights obtained during
training.
    This method is wrapped in an optional @task_wrapper decorator that controls the behavior during
failure. Useful for multiruns, saving info about the crash, etc.
:param cfg: A DictConfig configuration c... | train | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/train.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/train.py | MIT |
def main(cfg: DictConfig) -> Optional[float]:
"""Main entry point for training.
:param cfg: DictConfig configuration composed by Hydra.
:return: Optional[float] with optimized metric value.
"""
# apply extra utilities
# (e.g. ask for tags if none are provided in cfg, print cfg tree, etc.)
u... | Main entry point for training.
:param cfg: DictConfig configuration composed by Hydra.
:return: Optional[float] with optimized metric value.
| main | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/train.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/train.py | MIT |
def setup(self, stage: Optional[str] = None): # pylint: disable=unused-argument
"""Load data. Set variables: `self.data_train`, `self.data_val`, `self.data_test`.
This method is called by lightning with both `trainer.fit()` and `trainer.test()`, so be
careful not to execute things like random ... | Load data. Set variables: `self.data_train`, `self.data_val`, `self.data_test`.
This method is called by lightning with both `trainer.fit()` and `trainer.test()`, so be
careful not to execute things like random split twice!
| setup | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/data/text_mel_datamodule.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/data/text_mel_datamodule.py | MIT |
def synthesise(self, x, x_lengths, n_timesteps, temperature=1.0, spks=None, length_scale=1.0):
"""
Generates mel-spectrogram from text. Returns:
1. encoder outputs
2. decoder outputs
3. generated alignment
Args:
x (torch.Tensor): batch of texts, c... |
Generates mel-spectrogram from text. Returns:
1. encoder outputs
2. decoder outputs
3. generated alignment
Args:
x (torch.Tensor): batch of texts, converted to a tensor with phoneme embedding ids.
shape: (batch_size, max_text_length)
... | synthesise | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/models/matcha_tts.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/models/matcha_tts.py | MIT |
def forward(self, x, x_lengths, y, y_lengths, spks=None, out_size=None, cond=None):
"""
Computes 3 losses:
            1. duration loss: loss between predicted token durations and those extracted by Monotonic Alignment Search (MAS).
2. prior loss: loss between mel-spectrogram and encoder out... |
Computes 3 losses:
            1. duration loss: loss between predicted token durations and those extracted by Monotonic Alignment Search (MAS).
2. prior loss: loss between mel-spectrogram and encoder outputs.
3. flow matching loss: loss between mel-spectrogram and decoder outputs.
... | forward | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/models/matcha_tts.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/models/matcha_tts.py | MIT |
def forward(self, x, mask, mu, t, spks=None, cond=None):
"""Forward pass of the UNet1DConditional model.
Args:
x (torch.Tensor): shape (batch_size, in_channels, time)
mask (_type_): shape (batch_size, 1, time)
t (_type_): shape (batch_size)
spks (_type_, ... | Forward pass of the UNet1DConditional model.
Args:
x (torch.Tensor): shape (batch_size, in_channels, time)
mask (_type_): shape (batch_size, 1, time)
t (_type_): shape (batch_size)
spks (_type_, optional): shape: (batch_size, condition_channels). Defaults to None... | forward | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/models/components/decoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/models/components/decoder.py | MIT |
def forward(self, mu, mask, n_timesteps, temperature=1.0, spks=None, cond=None):
"""Forward diffusion
Args:
mu (torch.Tensor): output of encoder
shape: (batch_size, n_feats, mel_timesteps)
mask (torch.Tensor): output_mask
shape: (batch_size, 1, me... | Forward diffusion
Args:
mu (torch.Tensor): output of encoder
shape: (batch_size, n_feats, mel_timesteps)
mask (torch.Tensor): output_mask
shape: (batch_size, 1, mel_timesteps)
n_timesteps (int): number of diffusion steps
temperatur... | forward | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/models/components/flow_matching.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/models/components/flow_matching.py | MIT |
def solve_euler(self, x, t_span, mu, mask, spks, cond):
"""
        Fixed-step Euler solver for ODEs.
Args:
x (torch.Tensor): random noise
t_span (torch.Tensor): n_timesteps interpolated
shape: (n_timesteps + 1,)
mu (torch.Tensor): output of encoder
... |
        Fixed-step Euler solver for ODEs.
Args:
x (torch.Tensor): random noise
t_span (torch.Tensor): n_timesteps interpolated
shape: (n_timesteps + 1,)
mu (torch.Tensor): output of encoder
shape: (batch_size, n_feats, mel_timesteps)
ma... | solve_euler | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/models/components/flow_matching.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/models/components/flow_matching.py | MIT |
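The fixed-step Euler scheme underlying `solve_euler` can be checked on a scalar ODE with a known closed-form solution; this generic sketch drops the encoder/mask/speaker plumbing:

```python
import math


def euler_solve(f, y0: float, t_span):
    """Integrate dy/dt = f(t, y) across the given time points with Euler steps."""
    y, t = y0, t_span[0]
    for t_next in t_span[1:]:
        y = y + (t_next - t) * f(t, y)   # one explicit Euler step
        t = t_next
    return y


# dy/dt = -y with y(0) = 1 has the exact solution y(t) = exp(-t)
t_span = [i / 1000 for i in range(1001)]
y1 = euler_solve(lambda t, y: -y, 1.0, t_span)
err = abs(y1 - math.exp(-1))
```

With 1000 steps the result lands within about 2e-4 of `exp(-1)`, which is the expected first-order accuracy of the method.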
def compute_loss(self, x1, mask, mu, spks=None, cond=None):
"""Computes diffusion loss
Args:
x1 (torch.Tensor): Target
shape: (batch_size, n_feats, mel_timesteps)
mask (torch.Tensor): target mask
shape: (batch_size, 1, mel_timesteps)
m... | Computes diffusion loss
Args:
x1 (torch.Tensor): Target
shape: (batch_size, n_feats, mel_timesteps)
mask (torch.Tensor): target mask
shape: (batch_size, 1, mel_timesteps)
mu (torch.Tensor): output of encoder
shape: (batch_size,... | compute_loss | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/models/components/flow_matching.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/models/components/flow_matching.py | MIT |
def __init__(self, d: int, base: int = 10_000):
r"""
* `d` is the number of features $d$
* `base` is the constant used for calculating $\Theta$
"""
super().__init__()
self.base = base
self.d = int(d)
self.cos_cached = None
self.sin_cached = None |
* `d` is the number of features $d$
* `base` is the constant used for calculating $\Theta$
| __init__ | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/models/components/text_encoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/models/components/text_encoder.py | MIT |
def forward(self, x: torch.Tensor):
"""
* `x` is the Tensor at the head of a key or a query with shape `[seq_len, batch_size, n_heads, d]`
"""
# Cache $\cos$ and $\sin$ values
x = rearrange(x, "b h t d -> t b h d")
self._build_cache(x)
# Split the features, we c... |
* `x` is the Tensor at the head of a key or a query with shape `[seq_len, batch_size, n_heads, d]`
| forward | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/models/components/text_encoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/models/components/text_encoder.py | MIT |
def forward(self, x, x_lengths, spks=None):
"""Run forward pass to the transformer based encoder and duration predictor
Args:
x (torch.Tensor): text input
shape: (batch_size, max_text_length)
x_lengths (torch.Tensor): text input lengths
shape: (ba... | Run forward pass to the transformer based encoder and duration predictor
Args:
x (torch.Tensor): text input
shape: (batch_size, max_text_length)
x_lengths (torch.Tensor): text input lengths
shape: (batch_size,)
spks (torch.Tensor, optional): s... | forward | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/models/components/text_encoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/models/components/text_encoder.py | MIT |
def __init__(self, in_features, out_features, alpha=1.0, alpha_trainable=True, alpha_logscale=True):
"""
Initialization.
INPUT:
- in_features: shape of the input
- alpha - trainable parameter that controls frequency
- beta - trainable parameter that controls m... |
Initialization.
INPUT:
- in_features: shape of the input
- alpha - trainable parameter that controls frequency
- beta - trainable parameter that controls magnitude
alpha is initialized to 1 by default, higher values = higher-frequency.
beta is... | __init__ | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/models/components/transformer.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/models/components/transformer.py | MIT |
def get_exportable_module(matcha, vocoder, n_timesteps):
"""
    Return an appropriate `LightningModule` and output-node names
based on whether the vocoder is embedded in the final graph
"""
def onnx_forward_func(x, x_lengths, scales, spks=None):
"""
Custom forward function for accept... |
    Return an appropriate `LightningModule` and output-node names
based on whether the vocoder is embedded in the final graph
| get_exportable_module | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/onnx/export.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/onnx/export.py | MIT |
def onnx_forward_func(x, x_lengths, scales, spks=None):
"""
Custom forward function for accepting
scaler parameters as tensors
"""
# Extract scaler parameters from tensors
temperature = scales[0]
length_scale = scales[1]
output = matcha.synthesise(x, x_len... |
Custom forward function for accepting
scaler parameters as tensors
| onnx_forward_func | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/onnx/export.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/onnx/export.py | MIT |
def basic_cleaners(text):
"""Basic pipeline that lowercases and collapses whitespace without transliteration."""
text = lowercase(text)
text = collapse_whitespace(text)
return text | Basic pipeline that lowercases and collapses whitespace without transliteration. | basic_cleaners | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/text/cleaners.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/text/cleaners.py | MIT |
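A self-contained sketch of that pipeline; the real module keeps `lowercase` and `collapse_whitespace` as separate helpers, folded into one function here:

```python
import re

_whitespace_re = re.compile(r"\s+")


def basic_clean(text: str) -> str:
    """Lowercase and collapse runs of whitespace to single spaces."""
    return _whitespace_re.sub(" ", text.lower())


cleaned = basic_clean("Hello \t  WORLD")
```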
def transliteration_cleaners(text):
"""Pipeline for non-English text that transliterates to ASCII."""
text = convert_to_ascii(text)
text = lowercase(text)
text = collapse_whitespace(text)
return text | Pipeline for non-English text that transliterates to ASCII. | transliteration_cleaners | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/text/cleaners.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/text/cleaners.py | MIT |
def english_cleaners2(text):
"""Pipeline for English text, including abbreviation expansion. + punctuation + stress"""
text = convert_to_ascii(text)
text = lowercase(text)
text = expand_abbreviations(text)
phonemes = global_phonemizer.phonemize([text], strip=True, njobs=1)[0]
phonemes = collapse... | Pipeline for English text, including abbreviation expansion. + punctuation + stress | english_cleaners2 | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/text/cleaners.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/text/cleaners.py | MIT |
def english_cleaners_piper(text):
"""Pipeline for English text, including abbreviation expansion. + punctuation + stress"""
text = convert_to_ascii(text)
text = lowercase(text)
text = expand_abbreviations(text)
phonemes = "".join(piper_phonemize.phonemize_espeak(text=text, voice="en-US")[0])
pho... | Pipeline for English text, including abbreviation expansion. + punctuation + stress | english_cleaners_piper | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/text/cleaners.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/text/cleaners.py | MIT |
def text_to_sequence(text, cleaner_names):
"""Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
Args:
text: string to convert to a sequence
cleaner_names: names of the cleaner functions to run the text through
Returns:
List of integers corresponding t... | Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
Args:
text: string to convert to a sequence
cleaner_names: names of the cleaner functions to run the text through
Returns:
List of integers corresponding to the symbols in the text
| text_to_sequence | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/text/__init__.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/text/__init__.py | MIT |
def sequence_to_text(sequence):
"""Converts a sequence of IDs back to a string"""
result = ""
for symbol_id in sequence:
s = _id_to_symbol[symbol_id]
result += s
return result | Converts a sequence of IDs back to a string | sequence_to_text | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/text/__init__.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/text/__init__.py | MIT |
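The symbol/id round trip of `text_to_sequence` and `sequence_to_text` can be shown with a toy symbol table; this five-symbol alphabet is made up for the example, and Matcha's real symbol set (and its cleaner pipeline) is much larger:

```python
symbols = ["_", " ", "a", "b", "c"]           # hypothetical toy alphabet
_symbol_to_id = {s: i for i, s in enumerate(symbols)}
_id_to_symbol = {i: s for i, s in enumerate(symbols)}


def text_to_ids(text: str):
    """Map each known character to its id, skipping unknown ones."""
    return [_symbol_to_id[ch] for ch in text if ch in _symbol_to_id]


def ids_to_text(seq) -> str:
    """Inverse mapping, mirroring sequence_to_text."""
    return "".join(_id_to_symbol[i] for i in seq)


ids = text_to_ids("ab c")
roundtrip = ids_to_text(ids)
```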
def compute_data_statistics(data_loader: torch.utils.data.DataLoader, out_channels: int):
"""Generate data mean and standard deviation helpful in data normalisation
Args:
data_loader (torch.utils.data.Dataloader): _description_
out_channels (int): mel spectrogram channels
"""
    total_mel_... | Generate the data mean and standard deviation, useful for data normalisation
Args:
data_loader (torch.utils.data.Dataloader): _description_
out_channels (int): mel spectrogram channels
| compute_data_statistics | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/utils/generate_data_statistics.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/utils/generate_data_statistics.py | MIT |
def instantiate_callbacks(callbacks_cfg: DictConfig) -> List[Callback]:
"""Instantiates callbacks from config.
:param callbacks_cfg: A DictConfig object containing callback configurations.
:return: A list of instantiated callbacks.
"""
callbacks: List[Callback] = []
if not callbacks_cfg:
... | Instantiates callbacks from config.
:param callbacks_cfg: A DictConfig object containing callback configurations.
:return: A list of instantiated callbacks.
| instantiate_callbacks | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/utils/instantiators.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/utils/instantiators.py | MIT |
def instantiate_loggers(logger_cfg: DictConfig) -> List[Logger]:
"""Instantiates loggers from config.
:param logger_cfg: A DictConfig object containing logger configurations.
:return: A list of instantiated loggers.
"""
logger: List[Logger] = []
if not logger_cfg:
log.warning("No logge... | Instantiates loggers from config.
:param logger_cfg: A DictConfig object containing logger configurations.
:return: A list of instantiated loggers.
| instantiate_loggers | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/utils/instantiators.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/utils/instantiators.py | MIT |
def log_hyperparameters(object_dict: Dict[str, Any]) -> None:
"""Controls which config parts are saved by Lightning loggers.
Additionally saves:
- Number of model parameters
:param object_dict: A dictionary containing the following objects:
- `"cfg"`: A DictConfig object containing the mai... | Controls which config parts are saved by Lightning loggers.
Additionally saves:
- Number of model parameters
:param object_dict: A dictionary containing the following objects:
- `"cfg"`: A DictConfig object containing the main config.
- `"model"`: The Lightning model.
- `"train... | log_hyperparameters | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/utils/logging_utils.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/utils/logging_utils.py | MIT |
def get_pylogger(name: str = __name__) -> logging.Logger:
"""Initializes a multi-GPU-friendly python command line logger.
:param name: The name of the logger, defaults to ``__name__``.
:return: A logger object.
"""
logger = logging.getLogger(name)
# this ensures all logging levels get marked ... | Initializes a multi-GPU-friendly python command line logger.
:param name: The name of the logger, defaults to ``__name__``.
:return: A logger object.
| get_pylogger | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/utils/pylogger.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/utils/pylogger.py | MIT |
def print_config_tree(
cfg: DictConfig,
print_order: Sequence[str] = (
"data",
"model",
"callbacks",
"logger",
"trainer",
"paths",
"extras",
),
resolve: bool = False,
save_to_file: bool = False,
) -> None:
"""Prints the contents of a DictCo... | Prints the contents of a DictConfig as a tree structure using the Rich library.
:param cfg: A DictConfig composed by Hydra.
:param print_order: Determines in what order config components are printed. Default is ``("data", "model",
"callbacks", "logger", "trainer", "paths", "extras")``.
:param resolve: ... | print_config_tree | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/utils/rich_utils.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/utils/rich_utils.py | MIT |
def enforce_tags(cfg: DictConfig, save_to_file: bool = False) -> None:
"""Prompts user to input tags from command line if no tags are provided in config.
:param cfg: A DictConfig composed by Hydra.
:param save_to_file: Whether to export tags to the hydra output folder. Default is ``False``.
"""
if ... | Prompts user to input tags from command line if no tags are provided in config.
:param cfg: A DictConfig composed by Hydra.
:param save_to_file: Whether to export tags to the hydra output folder. Default is ``False``.
| enforce_tags | python | abus-aikorea/voice-pro | third_party/Matcha-TTS/matcha/utils/rich_utils.py | https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/utils/rich_utils.py | MIT |