| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def backward_G(self):
"""Calculate GAN and L1 loss for the generator"""
# First, G(A) should fake the discriminator
fake_AB = torch.cat((self.real_A, self.fake_B), 1)
pred_fake = self.netD(fake_AB)
self.loss_G_GAN = self.criterionGAN(pred_fake, True)
# Second, G(A) = B
... | Calculate GAN and L1 loss for the generator | backward_G | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/pix2pix4depth_model.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/pix2pix4depth_model.py | MIT |
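The `backward_G` row above combines an adversarial term with an L1 reconstruction term. A minimal NumPy sketch of that combination (the sigmoid/BCE formulation and the `lambda_L1=100.0` weight are assumptions; the actual code delegates to the framework's `criterionGAN` and `criterionL1` objects):

```python
import numpy as np

def generator_loss(pred_fake, fake_B, real_B, lambda_L1=100.0):
    """Sketch of the pix2pix generator loss: fool D, stay close to the target."""
    eps = 1e-12
    # GAN term: BCE of D's raw predictions against the "real" label (1).
    p = 1.0 / (1.0 + np.exp(-pred_fake))      # sigmoid over logits
    loss_gan = -np.mean(np.log(p + eps))      # push D(G(A)) toward "real"
    # L1 term: pixel-wise reconstruction against ground-truth B.
    loss_l1 = np.mean(np.abs(fake_B - real_B))
    return loss_gan + lambda_L1 * loss_l1

loss = generator_loss(np.array([2.0, 1.5]), np.zeros((4, 4)), np.zeros((4, 4)))
```

With a perfect reconstruction the L1 term vanishes and only the adversarial term remains.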
def find_model_using_name(model_name):
"""Import the module "models/[model_name]_model.py".
In the file, the class called DatasetNameModel() will
be instantiated. It has to be a subclass of BaseModel,
and it is case-insensitive.
"""
model_filename = "pix2pix.models." + model_name + "_model"
... | Import the module "models/[model_name]_model.py".
In the file, the class called DatasetNameModel() will
be instantiated. It has to be a subclass of BaseModel,
and it is case-insensitive.
| find_model_using_name | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/__init__.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/__init__.py | MIT |
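`find_model_using_name` dynamically imports `pix2pix.models.<name>_model` and then scans it for a case-insensitively matching `<name>model` subclass of `BaseModel`. A sketch of that lookup, using a throwaway module built with `types.ModuleType` in place of the real `importlib.import_module` call (the stand-in `BaseModel` and `Pix2PixModel` names here are illustrative only):

```python
import types

class BaseModel:
    """Stand-in for pix2pix.models.base_model.BaseModel (assumption)."""
    pass

def find_class_in_module(module, target_name, base_cls):
    """Case-insensitively find the '<name>model' subclass of base_cls in a module.
    In the real code, `module` comes from importlib.import_module(...)."""
    wanted = (target_name.replace('_', '') + 'model').lower()
    for attr, obj in vars(module).items():
        if attr.lower() == wanted and isinstance(obj, type) and issubclass(obj, base_cls):
            return obj
    return None

# Throwaway module standing in for the dynamically imported one.
mod = types.ModuleType("pix2pix.models.pix2pix_model")
mod.Pix2PixModel = type("Pix2PixModel", (BaseModel,), {})
found = find_class_in_module(mod, "pix2pix", BaseModel)
```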
def create_model(opt):
"""Create a model given the option.
This function wraps the class CustomDatasetDataLoader.
This is the main interface between this package and 'train.py'/'test.py'
Example:
>>> from models import create_model
>>> model = create_model(opt)
"""
model = find... | Create a model given the option.
This function wraps the class CustomDatasetDataLoader.
This is the main interface between this package and 'train.py'/'test.py'
Example:
>>> from models import create_model
>>> model = create_model(opt)
| create_model | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/__init__.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/__init__.py | MIT |
def initialize(self, parser):
"""Define the common options that are used in both training and test."""
# basic parameters
parser.add_argument('--dataroot', help='path to images (should have subfolders trainA, trainB, valA, valB, etc)')
parser.add_argument('--name', type=str, default='voi... | Define the common options that are used in both training and test. | initialize | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/options/base_options.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/options/base_options.py | MIT |
def gather_options(self):
"""Initialize our parser with basic options (only once).
Add additional model-specific and dataset-specific options.
These options are defined in the <modify_commandline_options> function
in model and dataset classes.
"""
if not self.initialized: ... | Initialize our parser with basic options (only once).
Add additional model-specific and dataset-specific options.
These options are defined in the <modify_commandline_options> function
in model and dataset classes.
| gather_options | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/options/base_options.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/options/base_options.py | MIT |
def print_options(self, opt):
"""Print and save options
It will print both current options and default values (if different).
It will save options into a text file / [checkpoints_dir] / opt.txt
"""
message = ''
message += '----------------- Options ---------------\n'
... | Print and save options
It will print both current options and default values (if different).
It will save options into a text file / [checkpoints_dir] / opt.txt
| print_options | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/options/base_options.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/options/base_options.py | MIT |
def parse(self):
"""Parse our options, create checkpoints directory suffix, and set up gpu device."""
opt = self.gather_options()
opt.isTrain = self.isTrain # train or test
# process opt.suffix
if opt.suffix:
suffix = ('_' + opt.suffix.format(**vars(opt))) if opt.s... | Parse our options, create checkpoints directory suffix, and set up gpu device. | parse | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/options/base_options.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/options/base_options.py | MIT |
def get(self, save_path, dataset=None):
"""
Download a dataset.
Parameters:
save_path (str) -- A directory to save the data to.
dataset (str) -- (optional). A specific dataset to download.
Note: this must include the file extension.
... |
Download a dataset.
Parameters:
save_path (str) -- A directory to save the data to.
dataset (str) -- (optional). A specific dataset to download.
Note: this must include the file extension.
If None, options will be prese... | get | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/util/get_data.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/util/get_data.py | MIT |
def __init__(self, web_dir, title, refresh=0):
"""Initialize the HTML classes
Parameters:
web_dir (str) -- a directory that stores the webpage. HTML file will be created at <web_dir>/index.html; images will be saved at <web_dir>/images/
title (str) -- the webpage name
... | Initialize the HTML classes
Parameters:
web_dir (str) -- a directory that stores the webpage. HTML file will be created at <web_dir>/index.html; images will be saved at <web_dir>/images/
title (str) -- the webpage name
refresh (int) -- how often the website refreshes itself; ... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/util/html.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/util/html.py | MIT |
def add_images(self, ims, txts, links, width=400):
"""add images to the HTML file
Parameters:
ims (str list) -- a list of image paths
txts (str list) -- a list of image names shown on the website
links (str list) -- a list of hyperref links; when you click an ima... | add images to the HTML file
Parameters:
ims (str list) -- a list of image paths
txts (str list) -- a list of image names shown on the website
links (str list) -- a list of hyperref links; when you click an image, it will redirect you to a new page
| add_images | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/util/html.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/util/html.py | MIT |
def save(self):
"""save the current content to the HTML file"""
html_file = '%s/index.html' % self.web_dir
f = open(html_file, 'wt')
f.write(self.doc.render())
f.close() | save the current content to the HTML file | save | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/util/html.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/util/html.py | MIT |
def __init__(self, pool_size):
"""Initialize the ImagePool class
Parameters:
pool_size (int) -- the size of the image buffer; if pool_size=0, no buffer will be created
"""
self.pool_size = pool_size
if self.pool_size > 0: # create an empty pool
self.num_imgs... | Initialize the ImagePool class
Parameters:
pool_size (int) -- the size of the image buffer; if pool_size=0, no buffer will be created
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/util/image_pool.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/util/image_pool.py | MIT |
def query(self, images):
"""Return an image from the pool.
Parameters:
images: the latest generated images from the generator
Returns images from the buffer.
With probability 1/2, the buffer will return the input images.
With probability 1/2, the buffer will return images previously stored ... | Return an image from the pool.
Parameters:
images: the latest generated images from the generator
Returns images from the buffer.
With probability 1/2, the buffer will return the input images.
With probability 1/2, the buffer will return images previously stored in the buffer,
and insert th... | query | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/util/image_pool.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/util/image_pool.py | MIT |
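The `query` docstring describes a replay buffer: each new image either passes through or, with probability 1/2, is swapped for one stored earlier. A plain-Python sketch under that reading (the real implementation operates on batched torch tensors):

```python
import random

class ImagePool:
    """Sketch of the pix2pix image buffer: return each new image as-is until
    the pool fills, then with probability 1/2 swap it with a stored one."""
    def __init__(self, pool_size):
        self.pool_size = pool_size
        self.images = []

    def query(self, images):
        if self.pool_size == 0:              # no buffering
            return images
        out = []
        for img in images:
            if len(self.images) < self.pool_size:
                self.images.append(img)      # fill the pool first
                out.append(img)
            elif random.random() < 0.5:      # swap with a stored image
                idx = random.randrange(self.pool_size)
                out.append(self.images[idx])
                self.images[idx] = img
            else:                            # pass the new image through
                out.append(img)
        return out

pool = ImagePool(4)
result = pool.query(list(range(10)))
```

Feeding the discriminator from this buffer decorrelates its updates from the generator's most recent outputs.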
def tensor2im(input_image, imtype=np.uint16):
"""Converts a Tensor array into a numpy image array.
Parameters:
input_image (tensor) -- the input image tensor array
imtype (type) -- the desired type of the converted numpy array
"""
if not isinstance(input_image, np.ndarray):
... | Converts a Tensor array into a numpy image array.
Parameters:
input_image (tensor) -- the input image tensor array
imtype (type) -- the desired type of the converted numpy array
| tensor2im | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/util/util.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/util/util.py | MIT |
def diagnose_network(net, name='network'):
"""Calculate and print the mean of the average absolute gradients
Parameters:
net (torch network) -- Torch network
name (str) -- the name of the network
"""
mean = 0.0
count = 0
for param in net.parameters():
if param.grad is not N... | Calculate and print the mean of the average absolute gradients
Parameters:
net (torch network) -- Torch network
name (str) -- the name of the network
| diagnose_network | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/util/util.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/util/util.py | MIT |
def save_image(image_numpy, image_path, aspect_ratio=1.0):
"""Save a numpy image to the disk
Parameters:
image_numpy (numpy array) -- input numpy array
image_path (str) -- the path of the image
"""
image_pil = Image.fromarray(image_numpy)
image_pil = image_pil.convert('I;1... | Save a numpy image to the disk
Parameters:
image_numpy (numpy array) -- input numpy array
image_path (str) -- the path of the image
| save_image | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/util/util.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/util/util.py | MIT |
def print_numpy(x, val=True, shp=False):
"""Print the mean, min, max, median, std, and size of a numpy array
Parameters:
val (bool) -- if print the values of the numpy array
shp (bool) -- if print the shape of the numpy array
"""
x = x.astype(np.float64)
if shp:
print('shape... | Print the mean, min, max, median, std, and size of a numpy array
Parameters:
val (bool) -- if print the values of the numpy array
shp (bool) -- if print the shape of the numpy array
| print_numpy | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/util/util.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/util/util.py | MIT |
def mkdirs(paths):
"""create empty directories if they don't exist
Parameters:
paths (str list) -- a list of directory paths
"""
if isinstance(paths, list) and not isinstance(paths, str):
for path in paths:
mkdir(path)
else:
mkdir(paths) | create empty directories if they don't exist
Parameters:
paths (str list) -- a list of directory paths
| mkdirs | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/util/util.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/util/util.py | MIT |
def save_images(webpage, visuals, image_path, aspect_ratio=1.0, width=256):
"""Save images to the disk.
Parameters:
webpage (the HTML class) -- the HTML webpage class that stores these images (see html.py for more details)
visuals (OrderedDict) -- an ordered dictionary that stores (name, ima... | Save images to the disk.
Parameters:
webpage (the HTML class) -- the HTML webpage class that stores these images (see html.py for more details)
visuals (OrderedDict) -- an ordered dictionary that stores (name, images (either tensor or numpy) ) pairs
image_path (str) -- the string... | save_images | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/util/visualizer.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/util/visualizer.py | MIT |
def __init__(self, opt):
"""Initialize the Visualizer class
Parameters:
opt -- stores all the experiment flags; needs to be a subclass of BaseOptions
Step 1: Cache the training/test options
Step 2: connect to a visdom server
Step 3: create an HTML object for saving ... | Initialize the Visualizer class
Parameters:
opt -- stores all the experiment flags; needs to be a subclass of BaseOptions
Step 1: Cache the training/test options
Step 2: connect to a visdom server
Step 3: create an HTML object for saving HTML filters
Step 4: create ... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/util/visualizer.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/util/visualizer.py | MIT |
def create_visdom_connections(self):
"""If the program could not connect to the Visdom server, this function will start a new server at port <self.port>"""
cmd = sys.executable + ' -m visdom.server -p %d &>/dev/null &' % self.port
print('\n\nCould not connect to Visdom server. \n Trying to start ... | If the program could not connect to the Visdom server, this function will start a new server at port <self.port>
def display_current_results(self, visuals, epoch, save_result):
"""Display current results on visdom; save current results to an HTML file.
Parameters:
visuals (OrderedDict) - - dictionary of images to display or save
epoch (int) - - the current epoch
save_result (bo... | Display current results on visdom; save current results to an HTML file.
Parameters:
visuals (OrderedDict) - - dictionary of images to display or save
epoch (int) - - the current epoch
save_result (bool) - - if save the current results to an HTML file
| display_current_results | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/util/visualizer.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/util/visualizer.py | MIT |
def print_current_losses(self, epoch, iters, losses, t_comp, t_data):
"""print current losses on console; also save the losses to the disk
Parameters:
epoch (int) -- current epoch
iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch)
... | print current losses on console; also save the losses to the disk
Parameters:
epoch (int) -- current epoch
iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch)
losses (OrderedDict) -- training losses stored in the format of (name... | print_current_losses | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/util/visualizer.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/util/visualizer.py | MIT |
def get_outpath():
"""Get path where results are saved by default"""
path = get_opt('outdir_samples', None)
if path is None or len(path) == 0:
path = get_opt('outdir_extras_samples', None)
assert path is not None and len(path) > 0
return path | Get path where results are saved by default | get_outpath | python | thygate/stable-diffusion-webui-depthmap-script | src/backbone.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/src/backbone.py | MIT |
def pano_depth_to_world_points(depth):
"""
360 depth to world points
given that the 2D depth map is an equirectangular projection of a spherical image,
treat depth as the radius
longitude : -pi to pi
latitude : -pi/2 to pi/2
"""
# Convert depth to radius
radius = depth.flatten()
lon = np.linspac... |
360 depth to world points
given that the 2D depth map is an equirectangular projection of a spherical image,
treat depth as the radius
longitude : -pi to pi
latitude : -pi/2 to pi/2
| pano_depth_to_world_points | python | thygate/stable-diffusion-webui-depthmap-script | src/core.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/src/core.py | MIT |
def depth_edges_mask(depth):
"""Returns a mask of edges in the depth map.
Args:
depth: 2D numpy array of shape (H, W) with dtype float32.
Returns:
mask: 2D numpy array of shape (H, W) with dtype bool.
"""
# Compute the x and y gradients of the depth map.
depth_dx, depth_dy = np.gradient(... | Returns a mask of edges in the depth map.
Args:
depth: 2D numpy array of shape (H, W) with dtype float32.
Returns:
mask: 2D numpy array of shape (H, W) with dtype bool.
| depth_edges_mask | python | thygate/stable-diffusion-webui-depthmap-script | src/core.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/src/core.py | MIT |
def load_models(self, model_type, device: torch.device, boost: bool, tiling_mode: bool = False):
"""Ensure that the depth model is loaded"""
# TODO: we need to at least try to find models downloaded by other plugins (e.g. controlnet)
# model path and name
# ZoeDepth and Marigold do not... | Ensure that the depth model is loaded | load_models | python | thygate/stable-diffusion-webui-depthmap-script | src/depthmap_generation.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/src/depthmap_generation.py | MIT |
def offload(self):
"""Move to RAM to conserve VRAM"""
if self.device != torch.device('cpu') and not self.offloaded:
self.move_models_to(torch.device('cpu'))
self.offloaded = True | Move to RAM to conserve VRAM | offload | python | thygate/stable-diffusion-webui-depthmap-script | src/depthmap_generation.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/src/depthmap_generation.py | MIT |
def get_raw_prediction(self, input, net_width, net_height):
"""Get prediction from the model currently loaded by the ModelHolder object.
If boost is enabled, net_width and net_height will be ignored."""
global depthmap_device
depthmap_device = self.device
# input image
im... | Get prediction from the model currently loaded by the ModelHolder object.
If boost is enabled, net_width and net_height will be ignored. | get_raw_prediction | python | thygate/stable-diffusion-webui-depthmap-script | src/depthmap_generation.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/src/depthmap_generation.py | MIT |
def scale_torch(img):
"""
Scale the image and output it in torch.tensor.
:param img: input rgb is in shape [H, W, C], input depth/disp is in shape [H, W]
:param scale: the scale factor. float
:return: img. [C, H, W]
"""
if len(img.shape) == 2:
img = img[np.newaxis, :, :]
if img.s... |
Scale the image and output it in torch.tensor.
:param img: input rgb is in shape [H, W, C], input depth/disp is in shape [H, W]
:param scale: the scale factor. float
:return: img. [C, H, W]
| scale_torch | python | thygate/stable-diffusion-webui-depthmap-script | src/depthmap_generation.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/src/depthmap_generation.py | MIT |
def print_options(self, opt):
"""Print and save options
It will print both current options and default values (if different).
It will save options into a text file / [checkpoints_dir] / opt.txt
"""
message = ''
message += '----------------- Options ---------------\n'
... | Print and save options
It will print both current options and default values (if different).
It will save options into a text file / [checkpoints_dir] / opt.txt
| print_options | python | thygate/stable-diffusion-webui-depthmap-script | src/depthmap_generation.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/src/depthmap_generation.py | MIT |
def parse(self):
"""Parse our options, create checkpoints directory suffix, and set up gpu device."""
opt = self.gather_options()
opt.isTrain = self.isTrain # train or test
# process opt.suffix
if opt.suffix:
suffix = ('_' + opt.suffix.format(**vars(opt))) if opt.su... | Parse our options, create checkpoints directory suffix, and set up gpu device. | parse | python | thygate/stable-diffusion-webui-depthmap-script | src/depthmap_generation.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/src/depthmap_generation.py | MIT |
def print_options(self, opt):
"""Print and save options
It will print both current options and default values (if different).
It will save options into a text file / [checkpoints_dir] / opt.txt
"""
message = ''
message += '----------------- Options ---------------\n'
... | Print and save options
It will print both current options and default values (if different).
It will save options into a text file / [checkpoints_dir] / opt.txt
| print_options | python | thygate/stable-diffusion-webui-depthmap-script | src/depthmap_generation.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/src/depthmap_generation.py | MIT |
def parse(self):
"""Parse our options, create checkpoints directory suffix, and set up gpu device."""
opt = self.gather_options()
opt.isTrain = self.isTrain # train or test
# process opt.suffix
if opt.suffix:
suffix = ('_' + opt.suffix.format(**vars(opt))) if opt.su... | Parse our options, create checkpoints directory suffix, and set up gpu device. | parse | python | thygate/stable-diffusion-webui-depthmap-script | src/depthmap_generation.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/src/depthmap_generation.py | MIT |
def __ior__(self, thing):
"""Add an extra bundle into your bundle, so you can have more bundled items in your bundle."""
assert isinstance(thing, GradioComponentBundle), "Use += or -= for bundling elements"
for key in list(thing.internal.keys()):
self._raw_assignment(key, thing[ke... | Add an extra bundle into your bundle, so you can have more bundled items in your bundle. | __ior__ | python | thygate/stable-diffusion-webui-depthmap-script | src/gradio_args_transport.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/src/gradio_args_transport.py | MIT |
def enkey_tail(self):
"""Must be the last element of the bundle for unbundling to work"""
keys = sorted(list(self.internal.keys()))
head = gr.HTML(elem_id="zzz_depthmap_enkey", value="\u222F" + "\u222F".join(keys), visible=False)
return head | Must be the last element of the bundle for unbundling to work | enkey_tail | python | thygate/stable-diffusion-webui-depthmap-script | src/gradio_args_transport.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/src/gradio_args_transport.py | MIT |
def enkey_to_dict(inp):
"""Unbundle: get a dictionary of values after they are sent by Gradio to the function.
Enkey format: a bunch of Gradio components,
then a Gradio component whose value is the concatenation of the names of the previous Gradio objects"""
assert inp[-1].startswith("\u222F")... | Unbundle: get a dictionary of values after they are sent by Gradio to the function.
Enkey format: a bunch of Gradio components,
then a Gradio component whose value is the concatenation of the names of the previous Gradio objects | enkey_to_dict | python | thygate/stable-diffusion-webui-depthmap-script | src/gradio_args_transport.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/src/gradio_args_transport.py | MIT |
def create_normalmap(depthmap,
pre_blur = None, sobel_gradient = 3, post_blur = None,
invert=False):
"""Generates normalmaps.
:param depthmap: depthmap that will be used to generate normalmap
:param pre_blur: apply gaussian blur before taking gradient, -1 for disabl... | Generates normalmaps.
:param depthmap: depthmap that will be used to generate normalmap
:param pre_blur: apply gaussian blur before taking gradient, -1 for disable, otherwise kernel size
:param sobel_gradient: use Sobel gradient, None for regular gradient, otherwise kernel size
:param post_blur: apply g... | create_normalmap | python | thygate/stable-diffusion-webui-depthmap-script | src/normalmap_generation.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/src/normalmap_generation.py | MIT |
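`create_normalmap` derives surface normals from depth gradients. A stripped-down NumPy sketch without the Gaussian pre/post blur and Sobel options the real function exposes (the `(-dz/dx, -dz/dy, 1)` normal convention is an assumption):

```python
import numpy as np

def depth_to_normals(depth):
    """Approximate a normal map from a depth map via finite-difference
    gradients, then normalize each per-pixel normal to unit length."""
    dzdy, dzdx = np.gradient(depth.astype(np.float64))   # axis 0, then axis 1
    # Normal = normalize((-dz/dx, -dz/dy, 1)) per pixel.
    normals = np.dstack([-dzdx, -dzdy, np.ones(depth.shape)])
    norm = np.linalg.norm(normals, axis=2, keepdims=True)
    return normals / norm

# A plane sloping along x: normals should tilt only in the x component.
normals = depth_to_normals(np.fromfunction(lambda y, x: 0.1 * x, (6, 6)))
```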
def create_stereoimages(original_image, depthmap, divergence, separation=0.0, modes=None,
stereo_balance=0.0, stereo_offset_exponent=1.0, fill_technique='polylines_sharp'):
"""Creates stereoscopic images.
An effort is made to make them look nice, but beware that the resulting image will ... | Creates stereoscopic images.
An effort is made to make them look nice, but beware that the resulting image will have some distortion.
The correctness was not rigorously tested.
:param original_image: original image from which the 3D image (stereoimage) will be created
:param depthmap: depthmap correspo... | create_stereoimages | python | thygate/stable-diffusion-webui-depthmap-script | src/stereoimage_generation.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/src/stereoimage_generation.py | MIT |
def open_path_as_images(path, maybe_depthvideo=False):
"""Takes the filepath, returns (fps, frames). Every frame is a Pillow Image object"""
suffix = pathlib.Path(path).suffix
if suffix.lower() == '.gif':
frames = []
img = Image.open(path)
for i in range(img.n_frames):
im... | Takes the filepath, returns (fps, frames). Every frame is a Pillow Image object | open_path_as_images | python | thygate/stable-diffusion-webui-depthmap-script | src/video_mode.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/src/video_mode.py | MIT |
def global_scaling(objs, a=None, b=None):
"""Normalizes objs, but uses (a, b) instead of (minimum, maximum) value of objs, if supplied"""
normalized = []
min_value = a if a is not None else min([obj.min() for obj in objs])
max_value = b if b is not None else max([obj.max() for obj in obj... | Normalizes objs, but uses (a, b) instead of (minimum, maximum) value of objs, if supplied | global_scaling | python | thygate/stable-diffusion-webui-depthmap-script | src/video_mode.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/src/video_mode.py | MIT |
def data_apr(self):
"""
passengerInfo 1#XXXX#1#***************77X#bf6ae40d3655ae7eff005ee21d95876b38ab97a8031b464bc2f74a067e3ec957;
jzParam 2019-08-31#19#00
hbTrain 5l000G177230,O#
lkParam
:return:
"""
ticker = TickerConfig.PASSENGER_TICKER_STR.get(TickerC... |
passengerInfo 1#XXXX#1#***************77X#bf6ae40d3655ae7eff005ee21d95876b38ab97a8031b464bc2f74a067e3ec957;
jzParam 2019-08-31#19#00
hbTrain 5l000G177230,O#
lkParam
:return:
| data_apr | python | testerSunshine/12306 | inter/ConfirmHB.py | https://github.com/testerSunshine/12306/blob/master/inter/ConfirmHB.py | MIT |
def __iter__(self):
""" Return an iterator over the source dataset processed by the
given processor.
"""
assert self.source is not None
assert callable(self.f)
return self.f(iter(self.source), *self.args, **self.kw) | Return an iterator over the source dataset processed by the
given processor.
| __iter__ | python | abus-aikorea/voice-pro | cosyvoice/dataset/dataset.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/dataset/dataset.py | MIT |
def sample(self, data):
""" Sample data according to rank/world_size/num_workers
Args:
data(List): input data list
Returns:
List: data list after sample
"""
data = list(range(len(data)))
# force datalist even
if self.parti... | Sample data according to rank/world_size/num_workers
Args:
data(List): input data list
Returns:
List: data list after sample
| sample | python | abus-aikorea/voice-pro | cosyvoice/dataset/dataset.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/dataset/dataset.py | MIT |
def Dataset(data_list_file,
data_pipeline,
mode='train',
gan=False,
shuffle=True,
partition=True,
tts_file='',
prompt_utt2data=''):
""" Construct dataset from arguments
We have two shuffle stage in the Dataset. The first is... | Construct dataset from arguments
We have two shuffle stages in the Dataset. The first is a global
shuffle at the shard tar/raw file level. The second is a global shuffle
at the training-sample level.
Args:
data_type(str): raw/shard
tokenizer (BaseTokenizer): tokenizer to ... | Dataset | python | abus-aikorea/voice-pro | cosyvoice/dataset/dataset.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/dataset/dataset.py | MIT |
def parquet_opener(data, mode='train', tts_data={}):
""" Given a url or local file, return a file descriptor
Inplace operation.
Args:
data(Iterable[str]): url or local file list
Returns:
Iterable[{src, stream}]
"""
for sample in data:
assert 'src' in samp... | Given a url or local file, return a file descriptor
Inplace operation.
Args:
data(Iterable[str]): url or local file list
Returns:
Iterable[{src, stream}]
| parquet_opener | python | abus-aikorea/voice-pro | cosyvoice/dataset/processor.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/dataset/processor.py | MIT |
def filter(data,
max_length=10240,
min_length=10,
token_max_length=200,
token_min_length=1,
min_output_input_ratio=0.0005,
max_output_input_ratio=1,
mode='train'):
""" Filter sample according to feature and label length
Inplace ope... | Filter sample according to feature and label length
Inplace operation.
Args:
data: Iterable[{key, wav, label, sample_rate}]
max_length: drop utterances longer than max_length (in 10 ms frames)
min_length: drop utterances shorter than min_length (in 10 ms frames)
... | filter | python | abus-aikorea/voice-pro | cosyvoice/dataset/processor.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/dataset/processor.py | MIT |
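The `filter` processor drops samples whose audio length (counted in 10 ms frames) or token count falls outside the configured bounds. A simplified generator sketch over plain dicts (the `num_frames` and `tokens` field names are assumptions for illustration; the real code computes frame counts from the waveform and sample rate):

```python
def length_filter(data, max_length=10240, min_length=10,
                  token_max_length=200, token_min_length=1):
    """Sketch of the length filter: yield only samples whose audio and
    token lengths fall inside the configured bounds."""
    for sample in data:
        if not (min_length <= sample['num_frames'] <= max_length):
            continue  # audio too short or too long
        if not (token_min_length <= len(sample['tokens']) <= token_max_length):
            continue  # transcript too short or too long
        yield sample

samples = [
    {'num_frames': 500, 'tokens': [1, 2, 3]},   # kept
    {'num_frames': 5, 'tokens': [1, 2, 3]},     # dropped: audio too short
    {'num_frames': 500, 'tokens': []},          # dropped: no tokens
]
kept = list(length_filter(iter(samples)))
```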
def resample(data, resample_rate=22050, min_sample_rate=16000, mode='train'):
""" Resample data.
Inplace operation.
Args:
data: Iterable[{key, wav, label, sample_rate}]
resample_rate: target resample rate
Returns:
Iterable[{key, wav, label, sample_rate}]... | Resample data.
Inplace operation.
Args:
data: Iterable[{key, wav, label, sample_rate}]
resample_rate: target resample rate
Returns:
Iterable[{key, wav, label, sample_rate}]
| resample | python | abus-aikorea/voice-pro | cosyvoice/dataset/processor.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/dataset/processor.py | MIT |
def truncate(data, truncate_length=24576, mode='train'):
""" Truncate data.
Args:
data: Iterable[{key, wav, label, sample_rate}]
truncate_length: truncate length
Returns:
Iterable[{key, wav, label, sample_rate}]
"""
for sample in data:
waveform =... | Truncate data.
Args:
data: Iterable[{key, wav, label, sample_rate}]
truncate_length: truncate length
Returns:
Iterable[{key, wav, label, sample_rate}]
| truncate | python | abus-aikorea/voice-pro | cosyvoice/dataset/processor.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/dataset/processor.py | MIT |
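`truncate` fixes every waveform to a common length. A plain-Python sketch that random-crops long inputs and zero-pads short ones (the zero-padding behavior is an assumption; the real code works on torch tensors):

```python
import random

def truncate(data, truncate_length=24576):
    """Sketch of the truncate processor: random-crop waveforms longer than
    truncate_length; zero-pad shorter ones to the same length."""
    for sample in data:
        waveform = sample['speech']
        if len(waveform) > truncate_length:
            start = random.randint(0, len(waveform) - truncate_length)
            waveform = waveform[start:start + truncate_length]
        else:
            waveform = waveform + [0.0] * (truncate_length - len(waveform))
        sample['speech'] = waveform
        yield sample

out = list(truncate(iter([{'speech': [0.1] * 10}, {'speech': [0.2] * 3}]),
                    truncate_length=6))
```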
def compute_fbank(data,
feat_extractor,
mode='train'):
""" Extract fbank
Args:
data: Iterable[{key, wav, label, sample_rate}]
Returns:
Iterable[{key, feat, label}]
"""
for sample in data:
assert 'sample_rate' in sample
... | Extract fbank
Args:
data: Iterable[{key, wav, label, sample_rate}]
Returns:
Iterable[{key, feat, label}]
| compute_fbank | python | abus-aikorea/voice-pro | cosyvoice/dataset/processor.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/dataset/processor.py | MIT |
def compute_f0(data, sample_rate, hop_size, mode='train'):
""" Extract f0
Args:
data: Iterable[{key, wav, label, sample_rate}]
Returns:
Iterable[{key, feat, label}]
"""
frame_period = hop_size * 1000 / sample_rate
for sample in data:
assert 'sample_rate'... | Extract f0
Args:
data: Iterable[{key, wav, label, sample_rate}]
Returns:
Iterable[{key, feat, label}]
| compute_f0 | python | abus-aikorea/voice-pro | cosyvoice/dataset/processor.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/dataset/processor.py | MIT |
def parse_embedding(data, normalize, mode='train'):
""" Parse utt_embedding/spk_embedding
Args:
data: Iterable[{key, wav, label, sample_rate}]
Returns:
Iterable[{key, feat, label}]
"""
for sample in data:
sample['utt_embedding'] = torch.tensor(sample['utt_em... | Parse utt_embedding/spk_embedding
Args:
data: Iterable[{key, wav, label, sample_rate}]
Returns:
Iterable[{key, feat, label}]
| parse_embedding | python | abus-aikorea/voice-pro | cosyvoice/dataset/processor.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/dataset/processor.py | MIT |
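parse_embedding takes a `normalize` flag; in similar pipelines that means L2-normalizing the speaker/utterance embedding (torch code would use `F.normalize`). A pure-Python stand-in:

```python
import math

def l2_normalize(vec):
    """L2-normalize an embedding vector (sketch of what the `normalize`
    flag in parse_embedding presumably does)."""
    norm = math.sqrt(sum(x * x for x in vec))
    if norm == 0.0:
        return list(vec)
    return [x / norm for x in vec]
```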
def tokenize(data, get_tokenizer, allowed_special, mode='train'):
""" Decode text to chars or BPE
Inplace operation
Args:
data: Iterable[{key, wav, txt, sample_rate}]
Returns:
Iterable[{key, wav, txt, tokens, label, sample_rate}]
"""
tokenizer = get_tokenize... | Decode text to chars or BPE
Inplace operation
Args:
data: Iterable[{key, wav, txt, sample_rate}]
Returns:
Iterable[{key, wav, txt, tokens, label, sample_rate}]
| tokenize | python | abus-aikorea/voice-pro | cosyvoice/dataset/processor.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/dataset/processor.py | MIT |
def shuffle(data, shuffle_size=10000, mode='train'):
""" Local shuffle the data
Args:
data: Iterable[{key, feat, label}]
shuffle_size: buffer size for shuffle
Returns:
Iterable[{key, feat, label}]
"""
buf = []
for sample in data:
buf.append(s... | Local shuffle the data
Args:
data: Iterable[{key, feat, label}]
shuffle_size: buffer size for shuffle
Returns:
Iterable[{key, feat, label}]
| shuffle | python | abus-aikorea/voice-pro | cosyvoice/dataset/processor.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/dataset/processor.py | MIT |
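The buffered local shuffle described above can be sketched as a generator: fill a buffer of `shuffle_size` samples, shuffle it, emit, repeat, then flush the tail:

```python
import random

def local_shuffle(data, shuffle_size=10000):
    """Locally shuffle an iterable within a bounded buffer (sketch)."""
    buf = []
    for sample in data:
        buf.append(sample)
        if len(buf) >= shuffle_size:
            random.shuffle(buf)
            for s in buf:
                yield s
            buf = []
    # flush the remaining tail buffer
    random.shuffle(buf)
    for s in buf:
        yield s
```

This keeps memory bounded while still randomizing sample order within each window.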
def sort(data, sort_size=500, mode='train'):
""" Sort the data by feature length.
Sort is used after shuffle and before batch, so we can group
utts with similar lengths into a batch, and `sort_size` should
be less than `shuffle_size`
Args:
data: Iterable[{key, feat, labe... | Sort the data by feature length.
Sort is used after shuffle and before batch, so we can group
utts with similar lengths into a batch, and `sort_size` should
be less than `shuffle_size`
Args:
data: Iterable[{key, feat, label}]
sort_size: buffer size for sort
... | sort | python | abus-aikorea/voice-pro | cosyvoice/dataset/processor.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/dataset/processor.py | MIT |
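The buffered sort described above groups nearby samples by length so later batching wastes less padding. A sketch (the real key would be each sample's feature length; `key=len` here is a stand-in):

```python
def buffered_sort(data, sort_size=500, key=len):
    """Sort samples within a bounded buffer so neighbours have similar
    lengths (sketch of the sort stage described above)."""
    buf = []
    for sample in data:
        buf.append(sample)
        if len(buf) >= sort_size:
            buf.sort(key=key)
            for s in buf:
                yield s
            buf = []
    buf.sort(key=key)
    for s in buf:
        yield s
```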
def static_batch(data, batch_size=16):
""" Static batch the data by `batch_size`
Args:
data: Iterable[{key, feat, label}]
batch_size: batch size
Returns:
Iterable[List[{key, feat, label}]]
"""
buf = []
for sample in data:
buf.append(sample)
... | Static batch the data by `batch_size`
Args:
data: Iterable[{key, feat, label}]
batch_size: batch size
Returns:
Iterable[List[{key, feat, label}]]
| static_batch | python | abus-aikorea/voice-pro | cosyvoice/dataset/processor.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/dataset/processor.py | MIT |
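static_batch's logic is simple enough to sketch completely: accumulate `batch_size` samples, yield, and emit the last partial batch at the end:

```python
def static_batch(data, batch_size=16):
    """Group an iterable into fixed-size batches; the last partial
    batch is still emitted (sketch of the behavior described above)."""
    buf = []
    for sample in data:
        buf.append(sample)
        if len(buf) >= batch_size:
            yield buf
            buf = []
    if buf:
        yield buf
```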
def dynamic_batch(data, max_frames_in_batch=12000, mode='train'):
""" Dynamic batch the data until the total frames in batch
reach `max_frames_in_batch`
Args:
data: Iterable[{key, feat, label}]
max_frames_in_batch: max_frames in one batch
Returns:
Iterab... | Dynamic batch the data until the total frames in batch
reach `max_frames_in_batch`
Args:
data: Iterable[{key, feat, label}]
max_frames_in_batch: max_frames in one batch
Returns:
Iterable[List[{key, feat, label}]]
| dynamic_batch | python | abus-aikorea/voice-pro | cosyvoice/dataset/processor.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/dataset/processor.py | MIT |
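dynamic_batch instead caps the total frame count per batch. A sketch under the assumption that each sample's frame count is available via a callable (`frames=len` here is a stand-in for the feature length):

```python
def dynamic_batch(data, max_frames_in_batch=12000, frames=len):
    """Accumulate samples until adding the next one would exceed
    max_frames_in_batch, then yield the batch (sketch; the exact
    boundary handling in the source may differ)."""
    buf, total = [], 0
    for sample in data:
        n = frames(sample)
        if buf and total + n > max_frames_in_batch:
            yield buf
            buf, total = [], 0
        buf.append(sample)
        total += n
    if buf:
        yield buf
```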
def padding(data, use_spk_embedding, mode='train', gan=False):
""" Padding the data into training data
Args:
data: Iterable[List[{key, feat, label}]]
Returns:
Iterable[Tuple(keys, feats, labels, feats lengths, label lengths)]
"""
for sample in data:
assert i... | Padding the data into training data
Args:
data: Iterable[List[{key, feat, label}]]
Returns:
Iterable[Tuple(keys, feats, labels, feats lengths, label lengths)]
| padding | python | abus-aikorea/voice-pro | cosyvoice/dataset/processor.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/dataset/processor.py | MIT |
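The padding stage right-pads variable-length features to the batch maximum and keeps the original lengths, which is what the `(feats, feats lengths)` tuple above implies. A pure-Python stand-in for `torch.nn.utils.rnn.pad_sequence`:

```python
def pad_batch(feats, pad_value=0.0):
    """Right-pad a batch of variable-length sequences to the batch
    maximum; return the padded batch and the original lengths (sketch)."""
    lengths = [len(f) for f in feats]
    max_len = max(lengths)
    padded = [list(f) + [pad_value] * (max_len - len(f)) for f in feats]
    return padded, lengths
```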
def __init__(
self,
in_channels,
out_channels,
causal=False,
channels=(256, 256),
dropout=0.05,
attention_head_dim=64,
n_blocks=1,
num_mid_blocks=2,
num_heads=4,
act_fn="snake",
):
"""
This decoder requires an in... |
This decoder requires an input with the same shape as the target. So, if your text content
is shorter or longer than the outputs, please re-sample it before feeding it to the decoder.
| __init__ | python | abus-aikorea/voice-pro | cosyvoice/flow/decoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/flow/decoder.py | MIT |
def forward(self, x, mask, mu, t, spks=None, cond=None):
"""Forward pass of the UNet1DConditional model.
Args:
x (torch.Tensor): shape (batch_size, in_channels, time)
mask (_type_): shape (batch_size, 1, time)
t (_type_): shape (batch_size)
spks (_type_, ... | Forward pass of the UNet1DConditional model.
Args:
x (torch.Tensor): shape (batch_size, in_channels, time)
mask (_type_): shape (batch_size, 1, time)
t (_type_): shape (batch_size)
spks (_type_, optional): shape: (batch_size, condition_channels). Defaults to None... | forward | python | abus-aikorea/voice-pro | cosyvoice/flow/decoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/flow/decoder.py | MIT |
def forward(self, mu, mask, n_timesteps, temperature=1.0, spks=None, cond=None, prompt_len=0, flow_cache=torch.zeros(1, 80, 0, 2)):
"""Forward diffusion
Args:
mu (torch.Tensor): output of encoder
shape: (batch_size, n_feats, mel_timesteps)
mask (torch.Tensor): ou... | Forward diffusion
Args:
mu (torch.Tensor): output of encoder
shape: (batch_size, n_feats, mel_timesteps)
mask (torch.Tensor): output_mask
shape: (batch_size, 1, mel_timesteps)
n_timesteps (int): number of diffusion steps
temperatur... | forward | python | abus-aikorea/voice-pro | cosyvoice/flow/flow_matching.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/flow/flow_matching.py | MIT |
def solve_euler(self, x, t_span, mu, mask, spks, cond):
"""
Fixed euler solver for ODEs.
Args:
x (torch.Tensor): random noise
t_span (torch.Tensor): n_timesteps interpolated
shape: (n_timesteps + 1,)
mu (torch.Tensor): output of encoder
... |
Fixed euler solver for ODEs.
Args:
x (torch.Tensor): random noise
t_span (torch.Tensor): n_timesteps interpolated
shape: (n_timesteps + 1,)
mu (torch.Tensor): output of encoder
shape: (batch_size, n_feats, mel_timesteps)
ma... | solve_euler | python | abus-aikorea/voice-pro | cosyvoice/flow/flow_matching.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/flow/flow_matching.py | MIT |
def compute_loss(self, x1, mask, mu, spks=None, cond=None):
"""Computes diffusion loss
Args:
x1 (torch.Tensor): Target
shape: (batch_size, n_feats, mel_timesteps)
mask (torch.Tensor): target mask
shape: (batch_size, 1, mel_timesteps)
m... | Computes diffusion loss
Args:
x1 (torch.Tensor): Target
shape: (batch_size, n_feats, mel_timesteps)
mask (torch.Tensor): target mask
shape: (batch_size, 1, mel_timesteps)
mu (torch.Tensor): output of encoder
shape: (batch_size,... | compute_loss | python | abus-aikorea/voice-pro | cosyvoice/flow/flow_matching.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/flow/flow_matching.py | MIT |
def forward(self, mu, mask, n_timesteps, temperature=1.0, spks=None, cond=None):
"""Forward diffusion
Args:
mu (torch.Tensor): output of encoder
shape: (batch_size, n_feats, mel_timesteps)
mask (torch.Tensor): output_mask
shape: (batch_size, 1, me... | Forward diffusion
Args:
mu (torch.Tensor): output of encoder
shape: (batch_size, n_feats, mel_timesteps)
mask (torch.Tensor): output_mask
shape: (batch_size, 1, mel_timesteps)
n_timesteps (int): number of diffusion steps
temperatur... | forward | python | abus-aikorea/voice-pro | cosyvoice/flow/flow_matching.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/flow/flow_matching.py | MIT |
def __init__(
self,
fft_sizes: Tuple[int, ...] = (2048, 1024, 512),
num_embeddings: Optional[int] = None,
):
"""
Multi-Resolution Discriminator module adapted from https://github.com/descriptinc/descript-audio-codec.
Additionally, it allows incorporating conditional i... |
Multi-Resolution Discriminator module adapted from https://github.com/descriptinc/descript-audio-codec.
Additionally, it allows incorporating conditional information with a learned embeddings table.
Args:
fft_sizes (tuple[int]): Tuple of window lengths for FFT. Defaults to (2048, 1... | __init__ | python | abus-aikorea/voice-pro | cosyvoice/hifigan/discriminator.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/hifigan/discriminator.py | MIT |
def forward(self, x):
"""
Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
F0_sampled (batchsize, length, 1)
Sine_source (batchsize, length, 1)
noise_source (batchsize, length, 1)
"""
# source for harmonic branch
with torch.no_grad():
s... |
Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
F0_sampled (batchsize, length, 1)
Sine_source (batchsize, length, 1)
noise_source (batchsize, length, 1)
| forward | python | abus-aikorea/voice-pro | cosyvoice/hifigan/generator.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/hifigan/generator.py | MIT |
def __init__(self, in_features, alpha=1.0, alpha_trainable=True, alpha_logscale=False):
'''
Initialization.
INPUT:
- in_features: shape of the input
- alpha: trainable parameter
alpha is initialized to 1 by default, higher values = higher-frequency.
... |
Initialization.
INPUT:
- in_features: shape of the input
- alpha: trainable parameter
alpha is initialized to 1 by default, higher values = higher-frequency.
alpha will be trained along with the rest of your model.
| __init__ | python | abus-aikorea/voice-pro | cosyvoice/transformer/activation.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/activation.py | MIT |
def forward_qkv(
self, query: torch.Tensor, key: torch.Tensor, value: torch.Tensor
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""Transform query, key and value.
Args:
query (torch.Tensor): Query tensor (#batch, time1, size).
key (torch.Tensor): Key tensor ... | Transform query, key and value.
Args:
query (torch.Tensor): Query tensor (#batch, time1, size).
key (torch.Tensor): Key tensor (#batch, time2, size).
value (torch.Tensor): Value tensor (#batch, time2, size).
Returns:
torch.Tensor: Transformed query tenso... | forward_qkv | python | abus-aikorea/voice-pro | cosyvoice/transformer/attention.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/attention.py | MIT |
def forward_attention(
self,
value: torch.Tensor,
scores: torch.Tensor,
mask: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool)
) -> torch.Tensor:
"""Compute attention context vector.
Args:
value (torch.Tensor): Transformed value, size
... | Compute attention context vector.
Args:
value (torch.Tensor): Transformed value, size
(#batch, n_head, time2, d_k).
scores (torch.Tensor): Attention score, size
(#batch, n_head, time1, time2).
mask (torch.Tensor): Mask, size (#batch, 1, time2)... | forward_attention | python | abus-aikorea/voice-pro | cosyvoice/transformer/attention.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/attention.py | MIT |
def forward(
self,
query: torch.Tensor,
key: torch.Tensor,
value: torch.Tensor,
mask: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool),
pos_emb: torch.Tensor = torch.empty(0),
cache: torch.Tensor = torch.zeros((0, 0, 0, 0))
) -> Tuple[torch.Tensor, torch... | Compute scaled dot product attention.
Args:
query (torch.Tensor): Query tensor (#batch, time1, size).
key (torch.Tensor): Key tensor (#batch, time2, size).
value (torch.Tensor): Value tensor (#batch, time2, size).
mask (torch.Tensor): Mask tensor (#batch, 1, time... | forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/attention.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/attention.py | MIT |
def rel_shift(self, x: torch.Tensor) -> torch.Tensor:
"""Compute relative positional encoding.
Args:
x (torch.Tensor): Input tensor (batch, head, time1, 2*time1-1).
time1 means the length of query vector.
Returns:
torch.Tensor: Output tensor.
"""
... | Compute relative positional encoding.
Args:
x (torch.Tensor): Input tensor (batch, head, time1, 2*time1-1).
time1 means the length of query vector.
Returns:
torch.Tensor: Output tensor.
| rel_shift | python | abus-aikorea/voice-pro | cosyvoice/transformer/attention.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/attention.py | MIT |
def forward(
self,
query: torch.Tensor,
key: torch.Tensor,
value: torch.Tensor,
mask: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool),
pos_emb: torch.Tensor = torch.empty(0),
cache: torch.Tensor = torch.zeros((0, 0, 0, 0))
) -> Tuple[torch.Tensor, torch... | Compute 'Scaled Dot Product Attention' with rel. positional encoding.
Args:
query (torch.Tensor): Query tensor (#batch, time1, size).
key (torch.Tensor): Key tensor (#batch, time2, size).
value (torch.Tensor): Value tensor (#batch, time2, size).
mask (torch.Tensor... | forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/attention.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/attention.py | MIT |
def __init__(self,
channels: int,
kernel_size: int = 15,
activation: nn.Module = nn.ReLU(),
norm: str = "batch_norm",
causal: bool = False,
bias: bool = True):
"""Construct a ConvolutionModule object.
... | Construct a ConvolutionModule object.
Args:
channels (int): The number of channels of conv layers.
kernel_size (int): Kernel size of conv layers.
causal (bool): Whether to use causal convolution or not
| __init__ | python | abus-aikorea/voice-pro | cosyvoice/transformer/convolution.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/convolution.py | MIT |
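The `causal` option above typically means left-padding the input with kernel_size - 1 zeros so each output frame sees only past context. A pure-Python sketch of that padding scheme (not the module's actual depthwise-conv implementation):

```python
def causal_conv1d(x, weights):
    """1-D causal convolution: left-pad with kernel_size - 1 zeros so
    y[t] depends only on x[t - k + 1 .. t] (illustrative sketch)."""
    k = len(weights)
    padded = [0.0] * (k - 1) + list(x)
    return [sum(w * padded[t + i] for i, w in enumerate(weights))
            for t in range(len(x))]
```

Output length equals input length, with no look-ahead into future samples.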
def forward(
self,
x: torch.Tensor,
mask_pad: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool),
cache: torch.Tensor = torch.zeros((0, 0, 0)),
) -> Tuple[torch.Tensor, torch.Tensor]:
"""Compute convolution module.
Args:
x (torch.Tensor): Input tensor ... | Compute convolution module.
Args:
x (torch.Tensor): Input tensor (#batch, time, channels).
mask_pad (torch.Tensor): used for batch padding (#batch, 1, time),
(0, 0, 0) means fake mask.
cache (torch.Tensor): left context cache, it is only
used i... | forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/convolution.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/convolution.py | MIT |
def forward(
self,
memory: torch.Tensor,
memory_mask: torch.Tensor,
ys_in_pad: torch.Tensor,
ys_in_lens: torch.Tensor,
r_ys_in_pad: torch.Tensor = torch.empty(0),
reverse_weight: float = 0.0,
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""For... | Forward decoder.
Args:
memory: encoded memory, float32 (batch, maxlen_in, feat)
memory_mask: encoder memory mask, (batch, 1, maxlen_in)
ys_in_pad: padded input token ids, int64 (batch, maxlen_out)
ys_in_lens: input lengths of this batch (batch)
r_ys_i... | forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/decoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/decoder.py | MIT |
def forward_one_step(
self,
memory: torch.Tensor,
memory_mask: torch.Tensor,
tgt: torch.Tensor,
tgt_mask: torch.Tensor,
cache: Optional[List[torch.Tensor]] = None,
) -> Tuple[torch.Tensor, List[torch.Tensor]]:
"""Forward one step.
This is only used... | Forward one step.
This is only used for decoding.
Args:
memory: encoded memory, float32 (batch, maxlen_in, feat)
memory_mask: encoded memory mask, (batch, 1, maxlen_in)
tgt: input token ids, int64 (batch, maxlen_out)
tgt_mask: input token mask, (batc... | forward_one_step | python | abus-aikorea/voice-pro | cosyvoice/transformer/decoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/decoder.py | MIT |
def tie_or_clone_weights(self, jit_mode: bool = True):
"""Tie or clone module weights (between word_emb and output_layer)
depending on whether we are using TorchScript or not"""
if not self.use_output_layer:
return
if jit_mode:
logging.info("clone emb.weight t... | Tie or clone module weights (between word_emb and output_layer)
depending on whether we are using TorchScript or not | tie_or_clone_weights | python | abus-aikorea/voice-pro | cosyvoice/transformer/decoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/decoder.py | MIT |
def forward(
self,
memory: torch.Tensor,
memory_mask: torch.Tensor,
ys_in_pad: torch.Tensor,
ys_in_lens: torch.Tensor,
r_ys_in_pad: torch.Tensor,
reverse_weight: float = 0.0,
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""Forward decoder.
... | Forward decoder.
Args:
memory: encoded memory, float32 (batch, maxlen_in, feat)
memory_mask: encoder memory mask, (batch, 1, maxlen_in)
ys_in_pad: padded input token ids, int64 (batch, maxlen_out)
ys_in_lens: input lengths of this batch (batch)
r_ys_i... | forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/decoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/decoder.py | MIT |
def forward_one_step(
self,
memory: torch.Tensor,
memory_mask: torch.Tensor,
tgt: torch.Tensor,
tgt_mask: torch.Tensor,
cache: Optional[List[torch.Tensor]] = None,
) -> Tuple[torch.Tensor, List[torch.Tensor]]:
"""Forward one step.
This is only used... | Forward one step.
This is only used for decoding.
Args:
memory: encoded memory, float32 (batch, maxlen_in, feat)
memory_mask: encoded memory mask, (batch, 1, maxlen_in)
tgt: input token ids, int64 (batch, maxlen_out)
tgt_mask: input token mask, (batc... | forward_one_step | python | abus-aikorea/voice-pro | cosyvoice/transformer/decoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/decoder.py | MIT |
def forward(
self,
tgt: torch.Tensor,
tgt_mask: torch.Tensor,
memory: torch.Tensor,
memory_mask: torch.Tensor,
cache: Optional[torch.Tensor] = None
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
"""Compute decoded features.
Args:
... | Compute decoded features.
Args:
tgt (torch.Tensor): Input tensor (#batch, maxlen_out, size).
tgt_mask (torch.Tensor): Mask for input tensor
(#batch, maxlen_out).
memory (torch.Tensor): Encoded memory
(#batch, maxlen_in, size).
memo... | forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/decoder_layer.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/decoder_layer.py | MIT |
def forward(self,
x: torch.Tensor,
offset: Union[int, torch.Tensor] = 0) \
-> Tuple[torch.Tensor, torch.Tensor]:
"""Add positional encoding.
Args:
x (torch.Tensor): Input. Its shape is (batch, time, ...)
offset (int, torch.tensor): pos... | Add positional encoding.
Args:
x (torch.Tensor): Input. Its shape is (batch, time, ...)
offset (int, torch.tensor): position offset
Returns:
torch.Tensor: Encoded tensor. Its shape is (batch, time, ...)
torch.Tensor: for compatibility to RelPositionalEnc... | forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/embedding.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/embedding.py | MIT |
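The PositionalEncoding entry above adds a sinusoidal table to the input. A pure-Python sketch of the classic construction, pe[pos][2i] = sin(pos / 10000^(2i/d)) and pe[pos][2i+1] = cos(pos / 10000^(2i/d)) (function name mine):

```python
import math

def sinusoidal_pe(max_len, d_model):
    """Build the classic sinusoidal positional-encoding table (sketch)."""
    pe = [[0.0] * d_model for _ in range(max_len)]
    for pos in range(max_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe
```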
def position_encoding(self,
offset: Union[int, torch.Tensor],
size: int,
apply_dropout: bool = True) -> torch.Tensor:
""" For getting encoding in a streaming fashion
Attention!!!!!
we apply dropout only once at the wh... | For getting encoding in a streaming fashion
Attention!!!!!
we apply dropout only once at the whole utterance level in a non-streaming
way, but will call this function several times with
increasing input size in a streaming scenario, so the dropout will
be applied several times... | position_encoding | python | abus-aikorea/voice-pro | cosyvoice/transformer/embedding.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/embedding.py | MIT |
def forward(self,
x: torch.Tensor,
offset: Union[int, torch.Tensor] = 0) \
-> Tuple[torch.Tensor, torch.Tensor]:
"""Compute positional encoding.
Args:
x (torch.Tensor): Input tensor (batch, time, `*`).
Returns:
torch.Tensor: Enc... | Compute positional encoding.
Args:
x (torch.Tensor): Input tensor (batch, time, `*`).
Returns:
torch.Tensor: Encoded tensor (batch, time, `*`).
torch.Tensor: Positional embedding tensor (1, time, `*`).
| forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/embedding.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/embedding.py | MIT |
def forward(self,
x: torch.Tensor,
offset: Union[int, torch.Tensor] = 0) \
-> Tuple[torch.Tensor, torch.Tensor]:
""" Just return zero vector for interface compatibility
"""
pos_emb = torch.zeros(1, x.size(1), self.d_model).to(x.device)
return s... | Just return zero vector for interface compatibility
| forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/embedding.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/embedding.py | MIT |
def forward(self, x: torch.Tensor, offset: Union[int, torch.Tensor] = 0) \
-> Tuple[torch.Tensor, torch.Tensor]:
"""Add positional encoding.
Args:
x (torch.Tensor): Input tensor (batch, time, `*`).
Returns:
torch.Tensor: Encoded tensor (batch, time, `*`).
... | Add positional encoding.
Args:
x (torch.Tensor): Input tensor (batch, time, `*`).
Returns:
torch.Tensor: Encoded tensor (batch, time, `*`).
| forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/embedding.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/embedding.py | MIT |
def position_encoding(self,
offset: Union[int, torch.Tensor],
size: int) -> torch.Tensor:
""" For getting encoding in a streaming fashion
Attention!!!!!
we apply dropout only once at the whole utterance level in a non-streaming
way, b... | For getting encoding in a streaming fashion
Attention!!!!!
we apply dropout only once at the whole utterance level in a non-streaming
way, but will call this function several times with
increasing input size in a streaming scenario, so the dropout will
be applied several times... | position_encoding | python | abus-aikorea/voice-pro | cosyvoice/transformer/embedding.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/embedding.py | MIT |
def __init__(
self,
input_size: int,
output_size: int = 256,
attention_heads: int = 4,
linear_units: int = 2048,
num_blocks: int = 6,
dropout_rate: float = 0.1,
positional_dropout_rate: float = 0.1,
attention_dropout_rate: float = 0.0,
inpu... |
Args:
input_size (int): input dim
output_size (int): dimension of attention
attention_heads (int): the number of heads of multi head attention
linear_units (int): the hidden units number of position-wise feed
forward
num_blocks (int): ... | __init__ | python | abus-aikorea/voice-pro | cosyvoice/transformer/encoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/encoder.py | MIT |
def forward(
self,
xs: torch.Tensor,
xs_lens: torch.Tensor,
decoding_chunk_size: int = 0,
num_decoding_left_chunks: int = -1,
) -> Tuple[torch.Tensor, torch.Tensor]:
"""Embed positions in tensor.
Args:
xs: padded input tensor (B, T, D)
... | Embed positions in tensor.
Args:
xs: padded input tensor (B, T, D)
xs_lens: input length (B)
decoding_chunk_size: decoding chunk size for dynamic chunk
0: default for training, use random dynamic chunk.
<0: for decoding, use full chunk.
... | forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/encoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/encoder.py | MIT |
def forward_chunk(
self,
xs: torch.Tensor,
offset: int,
required_cache_size: int,
att_cache: torch.Tensor = torch.zeros(0, 0, 0, 0),
cnn_cache: torch.Tensor = torch.zeros(0, 0, 0, 0),
att_mask: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool),
) -> Tuple... | Forward just one chunk
Args:
xs (torch.Tensor): chunk input, with shape (b=1, time, mel-dim),
where `time == (chunk_size - 1) * subsample_rate + subsample.right_context + 1`
offset (int): current offset in encoder output time stamp
re... | forward_chunk | python | abus-aikorea/voice-pro | cosyvoice/transformer/encoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/encoder.py | MIT |
def forward_chunk_by_chunk(
self,
xs: torch.Tensor,
decoding_chunk_size: int,
num_decoding_left_chunks: int = -1,
) -> Tuple[torch.Tensor, torch.Tensor]:
""" Forward input chunk by chunk with chunk_size in a streaming
fashion
Here we should pay special ... | Forward input chunk by chunk with chunk_size in a streaming
fashion
Here we should pay special attention to computation cache in the
streaming style forward chunk by chunk. Three things should be taken
into account for computation in the current network:
1. transforme... | forward_chunk_by_chunk | python | abus-aikorea/voice-pro | cosyvoice/transformer/encoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/encoder.py | MIT |
def __init__(
self,
input_size: int,
output_size: int = 256,
attention_heads: int = 4,
linear_units: int = 2048,
num_blocks: int = 6,
dropout_rate: float = 0.1,
positional_dropout_rate: float = 0.1,
attention_dropout_rate: float = 0.0,
inpu... | Construct TransformerEncoder
See Encoder for the meaning of each parameter.
| __init__ | python | abus-aikorea/voice-pro | cosyvoice/transformer/encoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/encoder.py | MIT |
def __init__(
self,
input_size: int,
output_size: int = 256,
attention_heads: int = 4,
linear_units: int = 2048,
num_blocks: int = 6,
dropout_rate: float = 0.1,
positional_dropout_rate: float = 0.1,
attention_dropout_rate: float = 0.0,
inpu... | Construct ConformerEncoder
Args:
input_size to use_dynamic_chunk: see BaseEncoder
positionwise_conv_kernel_size (int): Kernel size of positionwise
conv1d layer.
macaron_style (bool): Whether to use macaron style for
positionwise layer.
... | __init__ | python | abus-aikorea/voice-pro | cosyvoice/transformer/encoder.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/encoder.py | MIT |
def forward(self, x: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
"""Compute loss between x and target.
The model output and data label tensors are flattened to
(batch*seqlen, class) shape and a mask is applied to the
padding part which should not be calculated for loss.
... | Compute loss between x and target.
The model output and data label tensors are flattened to
(batch*seqlen, class) shape and a mask is applied to the
padding part which should not be calculated for loss.
Args:
x (torch.Tensor): prediction (batch, seqlen, class)
t... | forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/label_smoothing_loss.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/label_smoothing_loss.py | MIT |
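The label-smoothing loss described above spreads a small probability mass over the non-target classes and masks padded positions. A pure-Python sketch of the KL form (the source's reduction and normalization details may differ):

```python
import math

def label_smoothing_loss(logits, target, smoothing=0.1, padding_idx=-1):
    """KL-style label-smoothing loss over (seqlen, vocab) logits, with
    padded positions (target == padding_idx) masked out (sketch)."""
    vocab = len(logits[0])
    total, count = 0.0, 0
    for row, t in zip(logits, target):
        if t == padding_idx:
            continue  # ignore padding, as the docstring above describes
        # log-softmax of the row
        m = max(row)
        lse = m + math.log(sum(math.exp(x - m) for x in row))
        logp = [x - lse for x in row]
        conf, rest = 1.0 - smoothing, smoothing / (vocab - 1)
        for j in range(vocab):
            q = conf if j == t else rest
            total += q * (math.log(q) - logp[j])  # KL(q || p)
        count += 1
    return total / max(count, 1)
```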
def forward(self, xs: torch.Tensor) -> torch.Tensor:
"""Forward function.
Args:
xs: input tensor (B, L, D)
Returns:
output tensor, (B, L, D)
"""
B, L, D = xs.size(
) # batch size, sequence length, embedding dimension (idim)
xs = xs.view(-1... | Forward function.
Args:
xs: input tensor (B, L, D)
Returns:
output tensor, (B, L, D)
| forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/positionwise_feed_forward.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/positionwise_feed_forward.py | MIT |
def forward(
self,
x: torch.Tensor,
x_mask: torch.Tensor,
offset: Union[int, torch.Tensor] = 0
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""Input x.
Args:
x (torch.Tensor): Input tensor (#batch, time, idim).
x_mask (torch.Tensor): ... | Input x.
Args:
x (torch.Tensor): Input tensor (#batch, time, idim).
x_mask (torch.Tensor): Input mask (#batch, 1, time).
Returns:
torch.Tensor: linear input tensor (#batch, time', odim),
where time' = time .
torch.Tensor: linear input mas... | forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/subsampling.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/subsampling.py | MIT |
def forward(
self,
x: torch.Tensor,
x_mask: torch.Tensor,
offset: Union[int, torch.Tensor] = 0
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""Input x.
Args:
x (torch.Tensor): Input tensor (#batch, time, idim).
x_mask (torch.Tensor): ... | Input x.
Args:
x (torch.Tensor): Input tensor (#batch, time, idim).
x_mask (torch.Tensor): Input mask (#batch, 1, time).
Returns:
torch.Tensor: linear input tensor (#batch, time', odim),
where time' = time .
torch.Tensor: linear input mas... | forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/subsampling.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/subsampling.py | MIT |
def forward(
self,
x: torch.Tensor,
x_mask: torch.Tensor,
offset: Union[int, torch.Tensor] = 0
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""Subsample x.
Args:
x (torch.Tensor): Input tensor (#batch, time, idim).
x_mask (torch.Tenso... | Subsample x.
Args:
x (torch.Tensor): Input tensor (#batch, time, idim).
x_mask (torch.Tensor): Input mask (#batch, 1, time).
Returns:
torch.Tensor: Subsampled tensor (#batch, time', odim),
where time' = time // 2.
torch.Tensor: Subsampled... | forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/subsampling.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/subsampling.py | MIT |
def forward(
self,
x: torch.Tensor,
x_mask: torch.Tensor,
offset: Union[int, torch.Tensor] = 0
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""Subsample x.
Args:
x (torch.Tensor): Input tensor (#batch, time, idim).
x_mask (torch.Tenso... | Subsample x.
Args:
x (torch.Tensor): Input tensor (#batch, time, idim).
x_mask (torch.Tensor): Input mask (#batch, 1, time).
Returns:
torch.Tensor: Subsampled tensor (#batch, time', odim),
where time' = time // 4.
torch.Tensor: Subsampled... | forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/subsampling.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/subsampling.py | MIT |
def forward(
self,
x: torch.Tensor,
x_mask: torch.Tensor,
offset: Union[int, torch.Tensor] = 0
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""Subsample x.
Args:
x (torch.Tensor): Input tensor (#batch, time, idim).
x_mask (torch.Tensor... | Subsample x.
Args:
x (torch.Tensor): Input tensor (#batch, time, idim).
x_mask (torch.Tensor): Input mask (#batch, 1, time).
Returns:
torch.Tensor: Subsampled tensor (#batch, time', odim),
where time' = time // 6.
torch.Tensor: Subsampled ... | forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/subsampling.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/subsampling.py | MIT |
def forward(
self,
x: torch.Tensor,
x_mask: torch.Tensor,
offset: Union[int, torch.Tensor] = 0
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""Subsample x.
Args:
x (torch.Tensor): Input tensor (#batch, time, idim).
x_mask (torch.Tenso... | Subsample x.
Args:
x (torch.Tensor): Input tensor (#batch, time, idim).
x_mask (torch.Tensor): Input mask (#batch, 1, time).
Returns:
torch.Tensor: Subsampled tensor (#batch, time', odim),
where time' = time // 8.
torch.Tensor: Subsampled... | forward | python | abus-aikorea/voice-pro | cosyvoice/transformer/subsampling.py | https://github.com/abus-aikorea/voice-pro/blob/master/cosyvoice/transformer/subsampling.py | MIT |
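The time//4 and time//8 factors quoted in the subsampling docstrings typically come from stacking stride-2, kernel-3 convolutions with no padding (two for //4, three for //8). A sketch of that length arithmetic, under the assumption of those kernel/stride values:

```python
def conv_out_len(t, kernel=3, stride=2):
    """Output length of one 1-D conv with no padding."""
    return (t - kernel) // stride + 1

def subsampled_len(t, num_convs):
    """Chain num_convs stride-2, kernel-3 convolutions (sketch of the
    construction behind the time//2, //4, //8 factors above)."""
    for _ in range(num_convs):
        t = conv_out_len(t)
    return t
```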