| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA):
"""Rezise the sample to ensure the given size. Keeps aspect ratio.
Args:
sample (dict): sample
size (tuple): image size
Returns:
tuple: new size
"""
shape = list(sample["disparity"].shape)
if ... | Resize the sample to ensure the given size. Keeps aspect ratio.
Args:
sample (dict): sample
size (tuple): image size
Returns:
tuple: new size
| apply_min_size | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/data/transforms.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/data/transforms.py | MIT |
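The size logic implied by the `apply_min_size` docstring (scale up, preserving aspect ratio, until both dimensions meet the minimum) can be sketched as follows; the helper name and the rounding choice are assumptions, and the real function also resizes the sample's arrays:

```python
def min_size_scale(shape, size):
    """Compute an output (height, width) that is at least `size` in both
    dimensions while keeping aspect ratio (hypothetical sketch of the
    shape logic only; apply_min_size also resizes the sample arrays)."""
    h, w = shape
    min_h, min_w = size
    scale = max(min_h / h, min_w / w)
    if scale <= 1.0:
        return h, w  # already large enough in both dimensions
    return int(round(h * scale)), int(round(w * scale))
```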
def __init__(
self,
width,
height,
resize_if_needed=False,
image_interpolation_method=cv2.INTER_AREA,
):
"""Init.
Args:
width (int): output width
height (int): output height
resize_if_needed (bool, optional): If True, sampl... | Init.
Args:
width (int): output width
height (int): output height
resize_if_needed (bool, optional): If True, sample might be upsampled to ensure
that a crop of size (width, height) is possible. Defaults to False.
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/data/transforms.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/data/transforms.py | MIT |
def __init__(
self,
width,
height,
resize_target=True,
keep_aspect_ratio=False,
ensure_multiple_of=1,
resize_method="lower_bound",
image_interpolation_method=cv2.INTER_AREA,
letter_box=False,
):
"""Init.
Args:
width... | Init.
Args:
width (int): desired output width
height (int): desired output height
resize_target (bool, optional):
True: Resize the full sample (image, mask, target).
False: Resize image only.
Defaults to True.
keep_... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/data/transforms.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/data/transforms.py | MIT |
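The `ensure_multiple_of` option above typically means each target dimension is rounded to a multiple of the given value. A sketch of that rounding, with the helper name and the bound-handling strategy as assumptions:

```python
def constrain_to_multiple_of(x, multiple_of=1, min_val=0, max_val=None):
    """Round x to the nearest multiple of `multiple_of`, respecting optional
    bounds (hypothetical helper illustrating `ensure_multiple_of`)."""
    y = round(x / multiple_of) * multiple_of
    if max_val is not None and y > max_val:
        y = (x // multiple_of) * multiple_of      # round down instead
    if y < min_val:
        y = -(-x // multiple_of) * multiple_of    # round up (ceil)
    return int(y)
```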
def __init__(self, max_val=1.0, use_mask=True):
"""Init.
Args:
max_val (float, optional): Max output value. Defaults to 1.0.
use_mask (bool, optional): Only operate on valid pixels (mask == True). Defaults to True.
"""
self.__max_val = max_val
self.__use_... | Init.
Args:
max_val (float, optional): Max output value. Defaults to 1.0.
use_mask (bool, optional): Only operate on valid pixels (mask == True). Defaults to True.
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/data/transforms.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/data/transforms.py | MIT |
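The `max_val`/`use_mask` parameters above suggest min-max scaling to `[0, max_val]` with statistics taken from valid pixels only. A numpy sketch of that inferred behavior (not the repo's exact implementation):

```python
import numpy as np

def normalize(depth, mask=None, max_val=1.0, eps=1e-8):
    """Min-max scale `depth` to [0, max_val]; if `mask` is given, the min/max
    statistics come from valid pixels only (inferred behavior)."""
    valid = depth[mask] if mask is not None else depth
    d_min, d_max = float(valid.min()), float(valid.max())
    return max_val * (depth - d_min) / max(d_max - d_min, eps)
```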
def build_model(config) -> DepthModel:
"""Builds a model from a config. The model is specified by the model name and version in the config. The model is then constructed using the build_from_config function of the model interface.
This function should be used to construct models for training and evaluation.
... | Builds a model from a config. The model is specified by the model name and version in the config. The model is then constructed using the build_from_config function of the model interface.
This function should be used to construct models for training and evaluation.
Args:
config (dict): Config dict. Co... | build_model | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/builder.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/builder.py | MIT |
def _infer_with_pad_aug(self, x: torch.Tensor, pad_input: bool=True, fh: float=3, fw: float=3, upsampling_mode: str='bicubic', padding_mode="reflect", **kwargs) -> torch.Tensor:
"""
Inference interface for the model with padding augmentation
Padding augmentation fixes the boundary artifacts in t... |
Inference interface for the model with padding augmentation
Padding augmentation fixes the boundary artifacts in the output depth map.
Boundary artifacts are sometimes caused by the fact that the model is trained on NYU raw dataset which has a black or white border around the image.
Thi... | _infer_with_pad_aug | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/depth_model.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/depth_model.py | MIT |
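Padding augmentation as described above amounts to reflect-padding the input, running inference, and cropping the prediction back. A numpy sketch in which the pad-size formula (square root of the dimension times the `fh`/`fw` factors) is an assumption for illustration:

```python
import numpy as np

def infer_with_pad_aug(model_fn, x, fh=3.0, fw=3.0):
    """Reflect-pad a (b, c, h, w) batch, run `model_fn`, crop the output back.
    The pad-size formula below is an assumption, not the repo's exact one."""
    h, w = x.shape[-2:]
    pad_h = int(np.sqrt(h) * fh)
    pad_w = int(np.sqrt(w) * fw)
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad_h, pad_h), (pad_w, pad_w)),
                   mode="reflect")
    out = model_fn(x_pad)
    return out[..., pad_h:pad_h + h, pad_w:pad_w + w]
```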
def infer_with_flip_aug(self, x, pad_input: bool=True, **kwargs) -> torch.Tensor:
"""
Inference interface for the model with horizontal flip augmentation
Horizontal flip augmentation improves the accuracy of the model by averaging the output of the model with and without horizontal flip.
... |
Inference interface for the model with horizontal flip augmentation
Horizontal flip augmentation improves the accuracy of the model by averaging the output of the model with and without horizontal flip.
Args:
x (torch.Tensor): input tensor of shape (b, c, h, w)
pad_input... | infer_with_flip_aug | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/depth_model.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/depth_model.py | MIT |
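The flip-augmented averaging described above can be sketched in numpy, with `model_fn` standing in for the network:

```python
import numpy as np

def infer_with_flip_aug(model_fn, x):
    """Average the prediction on `x` with the un-flipped prediction on the
    horizontally flipped input. `model_fn` is a stand-in mapping
    (b, c, h, w) arrays to prediction arrays of the same layout."""
    out = model_fn(x)
    out_flip = model_fn(x[..., ::-1])          # flip along width
    return 0.5 * (out + out_flip[..., ::-1])   # flip back, then average
```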
def infer(self, x, pad_input: bool=True, with_flip_aug: bool=True, **kwargs) -> torch.Tensor:
"""
Inference interface for the model
Args:
x (torch.Tensor): input tensor of shape (b, c, h, w)
pad_input (bool, optional): whether to use padding augmentation. Defaults to True... |
Inference interface for the model
Args:
x (torch.Tensor): input tensor of shape (b, c, h, w)
pad_input (bool, optional): whether to use padding augmentation. Defaults to True.
with_flip_aug (bool, optional): whether to use horizontal flip augmentation. Defaults to Tr... | infer | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/depth_model.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/depth_model.py | MIT |
def infer_pil(self, pil_img, pad_input: bool=True, with_flip_aug: bool=True, output_type: str="numpy", **kwargs) -> Union[np.ndarray, PIL.Image.Image, torch.Tensor]:
"""
Inference interface for the model for PIL image
Args:
pil_img (PIL.Image.Image): input PIL image
pad_i... |
Inference interface for the model for PIL image
Args:
pil_img (PIL.Image.Image): input PIL image
pad_input (bool, optional): whether to use padding augmentation. Defaults to True.
with_flip_aug (bool, optional): whether to use horizontal flip augmentation. Defaults t... | infer_pil | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/depth_model.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/depth_model.py | MIT |
def load_state_dict(model, state_dict):
"""Load state_dict into model, handling DataParallel and DistributedDataParallel. Also checks for "model" key in state_dict.
DataParallel prefixes state_dict keys with 'module.' when saving.
If the model is not a DataParallel model but the state_dict is, then prefixe... | Load state_dict into model, handling DataParallel and DistributedDataParallel. Also checks for "model" key in state_dict.
DataParallel prefixes state_dict keys with 'module.' when saving.
If the model is not a DataParallel model but the state_dict is, then prefixes are removed.
If the model is a DataParall... | load_state_dict | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/model_io.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/model_io.py | MIT |
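The `'module.'`-prefix reconciliation described above can be sketched as a pure-dict transformation (hypothetical helper; the real function also loads the result into the model):

```python
def align_module_prefix(state_dict, model_is_parallel):
    """Add or strip the 'module.' prefix DataParallel puts on state_dict keys,
    so the checkpoint matches the target model (sketch of the behavior
    described above)."""
    sd = state_dict.get("model", state_dict)   # unwrap optional "model" key
    ckpt_is_parallel = all(k.startswith("module.") for k in sd)
    if ckpt_is_parallel and not model_is_parallel:
        return {k[len("module."):]: v for k, v in sd.items()}
    if not ckpt_is_parallel and model_is_parallel:
        return {"module." + k: v for k, v in sd.items()}
    return dict(sd)
```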
def load_state_from_resource(model, resource: str):
"""Loads weights to the model from a given resource. A resource can be of following types:
1. URL. Prefixed with "url::"
e.g. url::http(s)://url.resource.com/ckpt.pt
2. Local path. Prefixed with "local::"
e.g. local... | Loads weights to the model from a given resource. A resource can be of following types:
1. URL. Prefixed with "url::"
e.g. url::http(s)://url.resource.com/ckpt.pt
2. Local path. Prefixed with "local::"
e.g. local::/path/to/ckpt.pt
Args:
model (torch.nn.Modu... | load_state_from_resource | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/model_io.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/model_io.py | MIT |
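The resource-prefix convention above (`url::` and `local::`) can be parsed with a small helper (hypothetical name; the real function goes on to download or read the checkpoint):

```python
def parse_resource(resource: str):
    """Split a resource string into (kind, target) per the prefixes above."""
    for prefix in ("url::", "local::"):
        if resource.startswith(prefix):
            return prefix[:-2], resource[len(prefix):]
    raise ValueError(f"Unsupported resource type: {resource!r}")
```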
def denormalize(x):
"""Reverses the imagenet normalization applied to the input.
Args:
x (torch.Tensor - shape(N,3,H,W)): input tensor
Returns:
torch.Tensor - shape(N,3,H,W): Denormalized input
"""
mean = torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1).to(x.device)
std = t... | Reverses the imagenet normalization applied to the input.
Args:
x (torch.Tensor - shape(N,3,H,W)): input tensor
Returns:
torch.Tensor - shape(N,3,H,W): Denormalized input
| denormalize | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/base_models/midas.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/base_models/midas.py | MIT |
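The truncated body above can be sketched in numpy. The std row is cut off in the excerpt, so the values below are the usual published ImageNet statistics, stated as an assumption:

```python
import numpy as np

# Standard ImageNet statistics; the std values are an assumption since the
# corresponding line is truncated in the excerpt above.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406]).reshape(1, 3, 1, 1)
IMAGENET_STD = np.array([0.229, 0.224, 0.225]).reshape(1, 3, 1, 1)

def denormalize(x):
    """Reverse ImageNet normalization on an (N, 3, H, W) array."""
    return x * IMAGENET_STD + IMAGENET_MEAN
```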
def __init__(
self,
width,
height,
resize_target=True,
keep_aspect_ratio=False,
ensure_multiple_of=1,
resize_method="lower_bound",
):
"""Init.
Args:
width (int): desired output width
height (int): desired output height
... | Init.
Args:
width (int): desired output width
height (int): desired output height
resize_target (bool, optional):
True: Resize the full sample (image, mask, target).
False: Resize image only.
Defaults to True.
keep_a... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/base_models/midas.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/base_models/midas.py | MIT |
def __init__(self, midas, trainable=False, fetch_features=True, layer_names=('out_conv', 'l4_rn', 'r4', 'r3', 'r2', 'r1'), freeze_bn=False, keep_aspect_ratio=True,
img_size=384, **kwargs):
"""Midas Base model used for multi-scale feature extraction.
Args:
midas (torch.nn.Mo... | Midas Base model used for multi-scale feature extraction.
Args:
midas (torch.nn.Module): Midas model.
trainable (bool, optional): Train midas model. Defaults to False.
fetch_features (bool, optional): Extract multi-scale features. Defaults to True.
layer_names (t... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/base_models/midas.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/base_models/midas.py | MIT |
def __init__(self, in_features, n_bins, n_attractors=16, mlp_dim=128, min_depth=1e-3, max_depth=10,
alpha=300, gamma=2, kind='sum', attractor_type='exp', memory_efficient=False):
"""
Attractor layer for bin centers. Bin centers are bounded on the interval (min_depth, max_depth)
... |
Attractor layer for bin centers. Bin centers are bounded on the interval (min_depth, max_depth)
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/layers/attractor.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/layers/attractor.py | MIT |
def forward(self, x, b_prev, prev_b_embedding=None, interpolate=True, is_for_query=False):
"""
Args:
x (torch.Tensor) : feature block; shape - n, c, h, w
b_prev (torch.Tensor) : previous bin centers normed; shape - n, prev_nbins, h, w
Returns:
tuple(t... |
Args:
x (torch.Tensor) : feature block; shape - n, c, h, w
b_prev (torch.Tensor) : previous bin centers normed; shape - n, prev_nbins, h, w
Returns:
tuple(torch.Tensor,torch.Tensor) : new bin centers normed and scaled; shape - n, nbins, h, w
| forward | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/layers/attractor.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/layers/attractor.py | MIT |
def __init__(self, in_features, n_bins, n_attractors=16, mlp_dim=128, min_depth=1e-3, max_depth=10,
alpha=300, gamma=2, kind='sum', attractor_type='exp', memory_efficient=False):
"""
Attractor layer for bin centers. Bin centers are unbounded
"""
super().__init__()
... |
Attractor layer for bin centers. Bin centers are unbounded
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/layers/attractor.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/layers/attractor.py | MIT |
def forward(self, x, b_prev, prev_b_embedding=None, interpolate=True, is_for_query=False):
"""
Args:
x (torch.Tensor) : feature block; shape - n, c, h, w
b_prev (torch.Tensor) : previous bin centers normed; shape - n, prev_nbins, h, w
Returns:
tuple(t... |
Args:
x (torch.Tensor) : feature block; shape - n, c, h, w
b_prev (torch.Tensor) : previous bin centers normed; shape - n, prev_nbins, h, w
Returns:
tuple(torch.Tensor,torch.Tensor) : new bin centers unbounded; shape - n, nbins, h, w. Two outputs just to kee... | forward | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/layers/attractor.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/layers/attractor.py | MIT |
def __init__(self, n_classes=256, act=torch.softmax):
"""Compute log binomial distribution for n_classes
Args:
n_classes (int, optional): number of output classes. Defaults to 256.
"""
super().__init__()
self.K = n_classes
self.act = act
self.register... | Compute log binomial distribution for n_classes
Args:
n_classes (int, optional): number of output classes. Defaults to 256.
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/layers/dist_layers.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/layers/dist_layers.py | MIT |
def forward(self, x, t=1., eps=1e-4):
"""Compute log binomial distribution for x
Args:
x (torch.Tensor - NCHW): probabilities
t (float, torch.Tensor - NCHW, optional): Temperature of distribution. Defaults to 1.0.
eps (float, optional): Small number for numerical stab... | Compute log binomial distribution for x
Args:
x (torch.Tensor - NCHW): probabilities
t (float, torch.Tensor - NCHW, optional): Temperature of distribution. Defaults to 1.0.
eps (float, optional): Small number for numerical stability. Defaults to 1e-4.
Returns:
... | forward | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/layers/dist_layers.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/layers/dist_layers.py | MIT |
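The distribution the `LogBinomial` layer parameterizes is the log pmf of a Binomial over the `n_classes` bins. A 1-D sketch of that math (the real layer computes this per pixel with tensors):

```python
import math
import numpy as np

def log_binomial(p, n_classes=256, eps=1e-4):
    """Log pmf of Binomial(n_classes - 1, p) over k = 0..n_classes-1
    (scalar sketch of the distribution described above)."""
    p = min(max(p, eps), 1.0 - eps)           # numerical stability, as in eps
    k = np.arange(n_classes)
    log_coef = np.array([math.lgamma(n_classes) - math.lgamma(i + 1)
                         - math.lgamma(n_classes - i) for i in k])
    return log_coef + k * math.log(p) + (n_classes - 1 - k) * math.log(1.0 - p)
```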
def __init__(self, in_features, condition_dim, n_classes=256, bottleneck_factor=2, p_eps=1e-4, max_temp=50, min_temp=1e-7, act=torch.softmax):
"""Conditional Log Binomial distribution
Args:
in_features (int): number of input channels in main feature
condition_dim (int): number o... | Conditional Log Binomial distribution
Args:
in_features (int): number of input channels in main feature
condition_dim (int): number of input channels in condition feature
n_classes (int, optional): Number of classes. Defaults to 256.
bottleneck_factor (int, optio... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/layers/dist_layers.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/layers/dist_layers.py | MIT |
def forward(self, x, cond):
"""Forward pass
Args:
x (torch.Tensor - NCHW): Main feature
cond (torch.Tensor - NCHW): condition feature
Returns:
torch.Tensor: Output log binomial distribution
"""
pt = self.mlp(torch.concat((x, cond), dim=1))
... | Forward pass
Args:
x (torch.Tensor - NCHW): Main feature
cond (torch.Tensor - NCHW): condition feature
Returns:
torch.Tensor: Output log binomial distribution
| forward | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/layers/dist_layers.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/layers/dist_layers.py | MIT |
def __init__(self, in_features, n_bins=16, mlp_dim=256, min_depth=1e-3, max_depth=10):
"""Bin center regressor network. Bin centers are bounded on (min_depth, max_depth) interval.
Args:
in_features (int): input channels
n_bins (int, optional): Number of bin centers. Defaults to ... | Bin center regressor network. Bin centers are bounded on (min_depth, max_depth) interval.
Args:
in_features (int): input channels
n_bins (int, optional): Number of bin centers. Defaults to 16.
mlp_dim (int, optional): Hidden dimension. Defaults to 256.
min_depth ... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/layers/localbins_layers.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/layers/localbins_layers.py | MIT |
def forward(self, x):
"""
Returns tensor of bin_width vectors (centers). One vector b for every pixel
"""
B = self._net(x)
eps = 1e-3
B = B + eps
B_widths_normed = B / B.sum(dim=1, keepdim=True)
B_widths = (self.max_depth - self.min_depth) * \
... |
Returns tensor of bin_width vectors (centers). One vector b for every pixel
| forward | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/layers/localbins_layers.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/layers/localbins_layers.py | MIT |
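The width-normalization shown above (add eps, normalize to sum 1, scale to the depth range) yields bin centers as midpoints of cumulative edges. A 1-D numpy sketch (the real layer does this per pixel):

```python
import numpy as np

def bin_centers_from_widths(B, min_depth=1e-3, max_depth=10.0, eps=1e-3):
    """Turn raw positive bin-width activations into bin centers on
    (min_depth, max_depth), following the normalization shown above."""
    B = np.asarray(B, dtype=float) + eps
    widths_normed = B / B.sum()
    widths = (max_depth - min_depth) * widths_normed
    edges = np.concatenate(([min_depth], min_depth + np.cumsum(widths)))
    return 0.5 * (edges[:-1] + edges[1:])   # centers of consecutive edges
```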
def __init__(self, in_features, n_bins=16, mlp_dim=256, min_depth=1e-3, max_depth=10):
"""Bin center regressor network. Bin centers are unbounded
Args:
in_features (int): input channels
n_bins (int, optional): Number of bin centers. Defaults to 16.
mlp_dim (int, opti... | Bin center regressor network. Bin centers are unbounded
Args:
in_features (int): input channels
n_bins (int, optional): Number of bin centers. Defaults to 16.
mlp_dim (int, optional): Hidden dimension. Defaults to 256.
min_depth (float, optional): Not used. (for ... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/layers/localbins_layers.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/layers/localbins_layers.py | MIT |
def forward(self, x):
"""
Returns tensor of bin_width vectors (centers). One vector b for every pixel
"""
B_centers = self._net(x)
return B_centers, B_centers |
Returns tensor of bin_width vectors (centers). One vector b for every pixel
| forward | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/layers/localbins_layers.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/layers/localbins_layers.py | MIT |
def __init__(self, in_features, out_features, mlp_dim=128):
"""Projector MLP
Args:
in_features (int): input channels
out_features (int): output channels
mlp_dim (int, optional): hidden dimension. Defaults to 128.
"""
super().__init__()
self._... | Projector MLP
Args:
in_features (int): input channels
out_features (int): output channels
mlp_dim (int, optional): hidden dimension. Defaults to 128.
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/layers/localbins_layers.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/layers/localbins_layers.py | MIT |
def forward(self, x, b_prev, prev_b_embedding=None, interpolate=True, is_for_query=False):
"""
x : feature block; shape - n, c, h, w
b_prev : previous bin widths normed; shape - n, prev_nbins, h, w
"""
if prev_b_embedding is not None:
if interpolate:
p... |
x : feature block; shape - n, c, h, w
b_prev : previous bin widths normed; shape - n, prev_nbins, h, w
| forward | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/layers/localbins_layers.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/layers/localbins_layers.py | MIT |
def __init__(self, in_channels, patch_size=10, embedding_dim=128, num_heads=4, use_class_token=False):
"""ViT-like transformer block
Args:
in_channels (int): Input channels
patch_size (int, optional): patch size. Defaults to 10.
embedding_dim (int, optional): Embeddi... | ViT-like transformer block
Args:
in_channels (int): Input channels
patch_size (int, optional): patch size. Defaults to 10.
embedding_dim (int, optional): Embedding dimension in transformer model. Defaults to 128.
num_heads (int, optional): number of attention hea... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/layers/patch_transformer.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/layers/patch_transformer.py | MIT |
def positional_encoding_1d(self, sequence_length, batch_size, embedding_dim, device='cpu'):
"""Generate positional encodings
Args:
sequence_length (int): Sequence length
embedding_dim (int): Embedding dimension
Returns:
torch.Tensor SBE: Positional encodings... | Generate positional encodings
Args:
sequence_length (int): Sequence length
embedding_dim (int): Embedding dimension
Returns:
torch.Tensor SBE: Positional encodings
| positional_encoding_1d | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/layers/patch_transformer.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/layers/patch_transformer.py | MIT |
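The method name and (S, B, E) output shape suggest the standard sinusoidal scheme. A numpy sketch under that assumption (not verified against the repo; `embedding_dim` is assumed even):

```python
import numpy as np

def positional_encoding_1d(sequence_length, batch_size, embedding_dim):
    """Sinusoidal positional encodings of shape (S, B, E) — a sketch of the
    standard scheme the method name suggests."""
    position = np.arange(sequence_length)[:, None]                    # (S, 1)
    div_term = np.exp(np.arange(0, embedding_dim, 2)
                      * (-np.log(10000.0) / embedding_dim))           # (E/2,)
    pe = np.zeros((sequence_length, embedding_dim))
    pe[:, 0::2] = np.sin(position * div_term)
    pe[:, 1::2] = np.cos(position * div_term)
    return np.repeat(pe[:, None, :], batch_size, axis=1)              # (S, B, E)
```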
def forward(self, x):
"""Forward pass
Args:
x (torch.Tensor - NCHW): Input feature tensor
Returns:
torch.Tensor - SNE: Transformer output embeddings. S - sequence length (=HW/patch_size^2), N - batch size, E - embedding dim
"""
embeddings = self.embeddin... | Forward pass
Args:
x (torch.Tensor - NCHW): Input feature tensor
Returns:
torch.Tensor - SNE: Transformer output embeddings. S - sequence length (=HW/patch_size^2), N - batch size, E - embedding dim
| forward | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/layers/patch_transformer.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/layers/patch_transformer.py | MIT |
def __init__(self, core, n_bins=64, bin_centers_type="softplus", bin_embedding_dim=128, min_depth=1e-3, max_depth=10,
n_attractors=[16, 8, 4, 1], attractor_alpha=300, attractor_gamma=2, attractor_kind='sum', attractor_type='exp', min_temp=5, max_temp=50, train_midas=True,
midas_lr_fac... | ZoeDepth model. This is the version of ZoeDepth that has a single metric head
Args:
core (models.base_models.midas.MidasCore): The base midas model that is used for extraction of "relative" features
n_bins (int, optional): Number of bin centers. Defaults to 64.
bin_centers_t... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/zoedepth/zoedepth_v1.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/zoedepth/zoedepth_v1.py | MIT |
def forward(self, x, return_final_centers=False, denorm=False, return_probs=False, **kwargs):
"""
Args:
x (torch.Tensor): Input image tensor of shape (B, C, H, W)
return_final_centers (bool, optional): Whether to return the final bin centers. Defaults to False.
denorm... |
Args:
x (torch.Tensor): Input image tensor of shape (B, C, H, W)
return_final_centers (bool, optional): Whether to return the final bin centers. Defaults to False.
denorm (bool, optional): Whether to denormalize the input image. This reverses ImageNet normalization as midas ... | forward | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/zoedepth/zoedepth_v1.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/zoedepth/zoedepth_v1.py | MIT |
def get_lr_params(self, lr):
"""
Learning rate configuration for different layers of the model
Args:
lr (float) : Base learning rate
Returns:
list : list of parameters to optimize and their learning rates, in the format required by torch optimizers.
"""
... |
Learning rate configuration for different layers of the model
Args:
lr (float) : Base learning rate
Returns:
list : list of parameters to optimize and their learning rates, in the format required by torch optimizers.
| get_lr_params | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/zoedepth/zoedepth_v1.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/zoedepth/zoedepth_v1.py | MIT |
def __init__(self, core, bin_conf, bin_centers_type="softplus", bin_embedding_dim=128,
n_attractors=[16, 8, 4, 1], attractor_alpha=300, attractor_gamma=2, attractor_kind='sum', attractor_type='exp',
min_temp=5, max_temp=50,
memory_efficient=False, train_midas=True,
... | ZoeDepthNK model. This is the version of ZoeDepth that has two metric heads and uses a learned router to route to experts.
Args:
core (models.base_models.midas.MidasCore): The base midas model that is used for extraction of "relative" features
bin_conf (List[dict]): A list of dictionar... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/zoedepth_nk/zoedepth_nk_v1.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/zoedepth_nk/zoedepth_nk_v1.py | MIT |
def forward(self, x, return_final_centers=False, denorm=False, return_probs=False, **kwargs):
"""
Args:
x (torch.Tensor): Input image tensor of shape (B, C, H, W). Assumes all images are from the same domain.
return_final_centers (bool, optional): Whether to return the final cent... |
Args:
x (torch.Tensor): Input image tensor of shape (B, C, H, W). Assumes all images are from the same domain.
return_final_centers (bool, optional): Whether to return the final centers of the attractors. Defaults to False.
denorm (bool, optional): Whether to denormalize the... | forward | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/zoedepth_nk/zoedepth_nk_v1.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/zoedepth_nk/zoedepth_nk_v1.py | MIT |
def get_lr_params(self, lr):
"""
Learning rate configuration for different layers of the model
Args:
lr (float) : Base learning rate
Returns:
list : list of parameters to optimize and their learning rates, in the format required by torch optimizers.
"""
... |
Learning rate configuration for different layers of the model
Args:
lr (float) : Base learning rate
Returns:
list : list of parameters to optimize and their learning rates, in the format required by torch optimizers.
| get_lr_params | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/zoedepth_nk/zoedepth_nk_v1.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/zoedepth_nk/zoedepth_nk_v1.py | MIT |
def get_conf_parameters(self, conf_name):
"""
Returns parameters of all the ModuleDicts children that are exclusively used for the given bin configuration
"""
params = []
for name, child in self.named_children():
if isinstance(child, nn.ModuleDict):
fo... |
Returns parameters of all the ModuleDicts children that are exclusively used for the given bin configuration
| get_conf_parameters | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/zoedepth_nk/zoedepth_nk_v1.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/zoedepth_nk/zoedepth_nk_v1.py | MIT |
def freeze_conf(self, conf_name):
"""
Freezes all the parameters of all the ModuleDicts children that are exclusively used for the given bin configuration
"""
for p in self.get_conf_parameters(conf_name):
p.requires_grad = False |
Freezes all the parameters of all the ModuleDicts children that are exclusively used for the given bin configuration
| freeze_conf | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/zoedepth_nk/zoedepth_nk_v1.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/zoedepth_nk/zoedepth_nk_v1.py | MIT |
def unfreeze_conf(self, conf_name):
"""
Unfreezes all the parameters of all the ModuleDicts children that are exclusively used for the given bin configuration
"""
for p in self.get_conf_parameters(conf_name):
p.requires_grad = True |
Unfreezes all the parameters of all the ModuleDicts children that are exclusively used for the given bin configuration
| unfreeze_conf | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/zoedepth_nk/zoedepth_nk_v1.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/zoedepth_nk/zoedepth_nk_v1.py | MIT |
def freeze_all_confs(self):
"""
Freezes all the parameters of all the ModuleDicts children
"""
for name, child in self.named_children():
if isinstance(child, nn.ModuleDict):
for bin_conf_name, module in child.items():
for p in module.parame... |
Freezes all the parameters of all the ModuleDicts children
| freeze_all_confs | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/models/zoedepth_nk/zoedepth_nk_v1.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/models/zoedepth_nk/zoedepth_nk_v1.py | MIT |
def __init__(self, config, model, train_loader, test_loader=None, device=None):
""" Base Trainer class for training a model."""
self.config = config
self.metric_criterion = "abs_rel"
if device is None:
device = torch.device(
'cuda') if torch.cuda.is_a... | Base Trainer class for training a model. | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/trainers/base_trainer.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/trainers/base_trainer.py | MIT |
def get_trainer(config):
"""Builds and returns a trainer based on the config.
Args:
config (dict): the config dict (typically constructed using utils.config.get_config)
config.trainer (str): the name of the trainer to use. The module named "{config.trainer}_trainer" must exist in trainers r... | Builds and returns a trainer based on the config.
Args:
config (dict): the config dict (typically constructed using utils.config.get_config)
config.trainer (str): the name of the trainer to use. The module named "{config.trainer}_trainer" must exist in trainers root module
Raises:
... | get_trainer | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/trainers/builder.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/trainers/builder.py | MIT |
def __call__(self, prob, gt):
"""
:param prob: ordinal regression probability, N x 2*Ord Num x H x W, torch.Tensor
:param gt: depth ground truth, N x H x W, torch.Tensor
:return: loss: loss value, torch.float
"""
# N, C, H, W = prob.shape
valid_mask = gt > 0.
... |
:param prob: ordinal regression probability, N x 2*Ord Num x H x W, torch.Tensor
:param gt: depth ground truth, N x H x W, torch.Tensor
:return: loss: loss value, torch.float
| __call__ | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/trainers/loss.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/trainers/loss.py | MIT |
def train_on_batch(self, batch, train_step):
"""
Expects a batch of images and depth as input
batch["image"].shape : batch_size, c, h, w
batch["depth"].shape : batch_size, 1, h, w
Assumes all images in a batch are from the same dataset
"""
images, depths_gt = ba... |
Expects a batch of images and depth as input
batch["image"].shape : batch_size, c, h, w
batch["depth"].shape : batch_size, 1, h, w
Assumes all images in a batch are from the same dataset
| train_on_batch | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/trainers/zoedepth_nk_trainer.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/trainers/zoedepth_nk_trainer.py | MIT |
def train_on_batch(self, batch, train_step):
"""
Expects a batch of images and depth as input
batch["image"].shape : batch_size, c, h, w
batch["depth"].shape : batch_size, 1, h, w
"""
images, depths_gt = batch['image'].to(
self.device), batch['depth'].to(self... |
Expects a batch of images and depth as input
batch["image"].shape : batch_size, c, h, w
batch["depth"].shape : batch_size, 1, h, w
| train_on_batch | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/trainers/zoedepth_trainer.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/trainers/zoedepth_trainer.py | MIT |
def split_combined_args(kwargs):
"""Splits the arguments that are combined with '__' into multiple arguments.
Combined arguments should have equal number of keys and values.
Keys are separated by '__' and Values are separated with ';'.
For example, '__n_bins__lr=256;0.001'
Args:
kw... | Splits the arguments that are combined with '__' into multiple arguments.
Combined arguments should have equal number of keys and values.
Keys are separated by '__' and Values are separated with ';'.
For example, '__n_bins__lr=256;0.001'
Args:
kwargs (dict): key-value pairs of argument... | split_combined_args | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/utils/config.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/utils/config.py | MIT |
def parse_list(config, key, dtype=int):
"""Parse a list of values for the key if the value is a string. The values are separated by a comma.
Modifies the config in place.
"""
if key in config:
if isinstance(config[key], str):
config[key] = list(map(dtype, config[key].split(',')))
... | Parse a list of values for the key if the value is a string. The values are separated by a comma.
Modifies the config in place.
| parse_list | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/utils/config.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/utils/config.py | MIT |
def get_model_config(model_name, model_version=None):
"""Find and parse the .json config file for the model.
Args:
model_name (str): name of the model. The config file should be named config_{model_name}[_{model_version}].json under the models/{model_name} directory.
model_version (str, optiona... | Find and parse the .json config file for the model.
Args:
model_name (str): name of the model. The config file should be named config_{model_name}[_{model_version}].json under the models/{model_name} directory.
model_version (str, optional): Specific config version. If specified config_{model_name}... | get_model_config | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/utils/config.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/utils/config.py | MIT |
def get_config(model_name, mode='train', dataset=None, **overwrite_kwargs):
"""Main entry point to get the config for the model.
Args:
model_name (str): name of the desired model.
mode (str, optional): "train" or "infer". Defaults to 'train'.
dataset (str, optional): If specified, the c... | Main entry point to get the config for the model.
Args:
model_name (str): name of the desired model.
mode (str, optional): "train" or "infer". Defaults to 'train'.
dataset (str, optional): If specified, the corresponding dataset configuration is loaded as well. Defaults to None.
Ke... | get_config | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/utils/config.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/utils/config.py | MIT |
def get_intrinsics(H,W):
"""
Intrinsics for a pinhole camera model.
Assume fov of 55 degrees and central principal point.
"""
f = 0.5 * W / np.tan(0.5 * 55 * np.pi / 180.0)
cx = 0.5 * W
cy = 0.5 * H
return np.array([[f, 0, cx],
[0, f, cy],
[0, 0,... |
Intrinsics for a pinhole camera model.
Assume fov of 55 degrees and central principal point.
| get_intrinsics | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/utils/geometry.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/utils/geometry.py | MIT |
def create_triangles(h, w, mask=None):
"""
Reference: https://github.com/google-research/google-research/blob/e96197de06613f1b027d20328e06d69829fa5a89/infinite_nature/render_utils.py#L68
Creates mesh triangle indices from a given pixel grid size.
This function is not and need not be differentiable a... |
Reference: https://github.com/google-research/google-research/blob/e96197de06613f1b027d20328e06d69829fa5a89/infinite_nature/render_utils.py#L68
Creates mesh triangle indices from a given pixel grid size.
This function is not and need not be differentiable as triangle indices are
fixed.
Args... | create_triangles | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/utils/geometry.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/utils/geometry.py | MIT |
def denormalize(x):
"""Reverses the imagenet normalization applied to the input.
Args:
x (torch.Tensor - shape(N,3,H,W)): input tensor
Returns:
torch.Tensor - shape(N,3,H,W): Denormalized input
"""
mean = torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1).to(x.device)
std = t... | Reverses the imagenet normalization applied to the input.
Args:
x (torch.Tensor - shape(N,3,H,W)): input tensor
Returns:
torch.Tensor - shape(N,3,H,W): Denormalized input
| denormalize | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/utils/misc.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/utils/misc.py | MIT |
def colorize(value, vmin=None, vmax=None, cmap='gray_r', invalid_val=-99, invalid_mask=None, background_color=(128, 128, 128, 255), gamma_corrected=False, value_transform=None):
"""Converts a depth map to a color image.
Args:
value (torch.Tensor, numpy.ndarray): Input depth map. Shape: (H, W) or (1, H, ... | Converts a depth map to a color image.
Args:
value (torch.Tensor, numpy.ndarray): Input depth map. Shape: (H, W) or (1, H, W) or (1, 1, H, W). All singular dimensions are squeezed
vmin (float, optional): vmin-valued entries are mapped to start color of cmap. If None, value.min() is used. Defaults to... | colorize | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/utils/misc.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/utils/misc.py | MIT |
def compute_errors(gt, pred):
"""Compute metrics for 'pred' compared to 'gt'
Args:
gt (numpy.ndarray): Ground truth values
pred (numpy.ndarray): Predicted values
gt.shape should be equal to pred.shape
Returns:
dict: Dictionary containing the following metrics:
... | Compute metrics for 'pred' compared to 'gt'
Args:
gt (numpy.ndarray): Ground truth values
pred (numpy.ndarray): Predicted values
gt.shape should be equal to pred.shape
Returns:
dict: Dictionary containing the following metrics:
'a1': Delta1 accuracy: Fraction of pi... | compute_errors | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/utils/misc.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/utils/misc.py | MIT |
def compute_metrics(gt, pred, interpolate=True, garg_crop=False, eigen_crop=True, dataset='nyu', min_depth_eval=0.1, max_depth_eval=10, **kwargs):
"""Compute metrics of predicted depth maps. Applies cropping and masking as necessary or specified via arguments. Refer to compute_errors for more details on metrics.
... | Compute metrics of predicted depth maps. Applies cropping and masking as necessary or specified via arguments. Refer to compute_errors for more details on metrics.
| compute_metrics | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/utils/misc.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/utils/misc.py | MIT |
def resize_depth(depth, width, height):
"""Resize numpy (or image read by imageio) depth map
Args:
depth (numpy): depth
width (int): image width
height (int): image height
Returns:
array: processed depth
"""
depth = cv2.blur(depth, (3, 3))
return cv2.resize(dept... | Resize numpy (or image read by imageio) depth map
Args:
depth (numpy): depth
width (int): image width
height (int): image height
Returns:
array: processed depth
| resize_depth | python | thygate/stable-diffusion-webui-depthmap-script | inpaint/boostmonodepth_utils.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/inpaint/boostmonodepth_utils.py | MIT |
def update_status(mesh, info_on_pix, depth=None):
'''
(2) clear_node_feat(G, *fts) : Clear all the node feature on graph G.
(6) get_cross_nes(x, y) : Get the four cross neighbors of pixel(x, y).
'''
key_exist = lambda d, k: d.get(k) is not None
is_inside = lambda x, y, xmin, xmax, ymin, ymax: xm... |
(2) clear_node_feat(G, *fts) : Clear all the node feature on graph G.
(6) get_cross_nes(x, y) : Get the four cross neighbors of pixel(x, y).
| update_status | python | thygate/stable-diffusion-webui-depthmap-script | inpaint/mesh.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/inpaint/mesh.py | MIT |
def group_edges(LDI, config, image, remove_conflict_ordinal, spdb=False):
'''
(1) add_new_node(G, node) : add "node" to graph "G"
(2) add_new_edge(G, node_a, node_b) : add edge "node_a--node_b" to graph "G"
(3) exceed_thre(x, y, thre) : Check if the difference between "x" and "y" exceeds threshold "thre"
... |
(1) add_new_node(G, node) : add "node" to graph "G"
(2) add_new_edge(G, node_a, node_b) : add edge "node_a--node_b" to graph "G"
(3) exceed_thre(x, y, thre) : Check if the difference between "x" and "y" exceeds threshold "thre"
(4) key_exist(d, k) : Check if key "k" exists in dictionary "d"
(5) comm_opp... | group_edges | python | thygate/stable-diffusion-webui-depthmap-script | inpaint/mesh.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/inpaint/mesh.py | MIT |
def init_weights(self, init_type='normal', gain=0.02):
'''
initialize network's weights
init_type: normal | xavier | kaiming | orthogonal
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/9451e70673400885567d08a9e97ade2524c700d0/models/networks.py#L39
'''
def ... |
initialize network's weights
init_type: normal | xavier | kaiming | orthogonal
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/9451e70673400885567d08a9e97ade2524c700d0/models/networks.py#L39
| init_weights | python | thygate/stable-diffusion-webui-depthmap-script | inpaint/networks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/inpaint/networks.py | MIT |
def train(self, mode=True):
"""
Override the default train() to freeze the BN parameters
"""
super().train(mode)
if self.freeze_enc_bn:
for name, module in self.named_modules():
if isinstance(module, nn.BatchNorm2d) and 'enc' in name:
... |
Override the default train() to freeze the BN parameters
| train | python | thygate/stable-diffusion-webui-depthmap-script | inpaint/networks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/inpaint/networks.py | MIT |
def get_func(func_name):
"""Helper to return a function object by name. func_name must identify a
function in this module or the path to a function relative to the base
'modeling' module.
"""
if func_name == '':
return None
try:
parts = func_name.split('.')
# Refers to a ... | Helper to return a function object by name. func_name must identify a
function in this module or the path to a function relative to the base
'modeling' module.
| get_func | python | thygate/stable-diffusion-webui-depthmap-script | lib/net_tools.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/lib/net_tools.py | MIT |
def resnext101_32x8d(pretrained=True, **kwargs):
"""Constructs a ResNeXt-101 32x8d model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
kwargs['groups'] = 32
kwargs['width_per_group'] = 8
model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
return model | Constructs a ResNeXt-101 32x8d model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
| resnext101_32x8d | python | thygate/stable-diffusion-webui-depthmap-script | lib/Resnext_torch.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/lib/Resnext_torch.py | MIT |
def reconstruct_3D(depth, f):
"""
Reconstruct depth to 3D pointcloud with the provided focal length.
Return:
pcd: N X 3 array, point cloud
"""
cu = depth.shape[1] / 2
cv = depth.shape[0] / 2
width = depth.shape[1]
height = depth.shape[0]
row = np.arange(0, width, 1)
u = n... |
Reconstruct depth to 3D pointcloud with the provided focal length.
Return:
pcd: N X 3 array, point cloud
| reconstruct_3D | python | thygate/stable-diffusion-webui-depthmap-script | lib/test_utils.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/lib/test_utils.py | MIT |
def save_point_cloud(pcd, rgb, filename, binary=True):
"""Save an RGB point cloud as a PLY file.
:params
@pcd: Nx3 matrix, the XYZ coordinates
@rgb: Nx3 matrix, the rgb colors for each 3D point
"""
assert pcd.shape[0] == rgb.shape[0]
if rgb is None:
gray_concat = np.tile(np.arra... | Save an RGB point cloud as a PLY file.
:params
@pcd: Nx3 matrix, the XYZ coordinates
@rgb: Nx3 matrix, the rgb colors for each 3D point
| save_point_cloud | python | thygate/stable-diffusion-webui-depthmap-script | lib/test_utils.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/lib/test_utils.py | MIT |
def reconstruct_depth(depth, rgb, dir, pcd_name, focal):
"""
para disp: disparity, [h, w]
para rgb: rgb image, [h, w, 3], in rgb format
"""
rgb = np.squeeze(rgb)
depth = np.squeeze(depth)
mask = depth < 1e-8
depth[mask] = 0
depth = depth / depth.max() * 10000
pcd = reconstruct_... |
para disp: disparity, [h, w]
para rgb: rgb image, [h, w, 3], in rgb format
| reconstruct_depth | python | thygate/stable-diffusion-webui-depthmap-script | lib/test_utils.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/lib/test_utils.py | MIT |
def __init__(self, opt):
"""Initialize the class; save the options in the class
Parameters:
opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions
"""
self.opt = opt
self.root = opt.dataroot | Initialize the class; save the options in the class
Parameters:
opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/data/base_dataset.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/data/base_dataset.py | MIT |
def __print_size_warning(ow, oh, w, h):
"""Print warning information about image size(only print once)"""
if not hasattr(__print_size_warning, 'has_printed'):
print("The image size needs to be a multiple of 4. "
"The loaded image size was (%d, %d), so it was adjusted to "
"(%... | Print warning information about image size(only print once) | __print_size_warning | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/data/base_dataset.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/data/base_dataset.py | MIT |
def find_dataset_using_name(dataset_name):
"""Import the module "data/[dataset_name]_dataset.py".
In the file, the class called DatasetNameDataset() will
be instantiated. It has to be a subclass of BaseDataset,
and it is case-insensitive.
"""
dataset_filename = "pix2pix.data." + dataset_name + ... | Import the module "data/[dataset_name]_dataset.py".
In the file, the class called DatasetNameDataset() will
be instantiated. It has to be a subclass of BaseDataset,
and it is case-insensitive.
| find_dataset_using_name | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/data/__init__.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/data/__init__.py | MIT |
def create_dataset(opt):
"""Create a dataset given the option.
This function wraps the class CustomDatasetDataLoader.
This is the main interface between this package and 'train.py'/'test.py'
Example:
>>> from data import create_dataset
>>> dataset = create_dataset(opt)
"""
... | Create a dataset given the option.
This function wraps the class CustomDatasetDataLoader.
This is the main interface between this package and 'train.py'/'test.py'
Example:
>>> from data import create_dataset
>>> dataset = create_dataset(opt)
| create_dataset | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/data/__init__.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/data/__init__.py | MIT |
def __init__(self, opt):
"""Initialize this class
Step 1: create a dataset instance given the name [dataset_mode]
Step 2: create a multi-threaded data loader.
"""
self.opt = opt
dataset_class = find_dataset_using_name(opt.dataset_mode)
self.dataset = dataset_clas... | Initialize this class
Step 1: create a dataset instance given the name [dataset_mode]
Step 2: create a multi-threaded data loader.
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/data/__init__.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/data/__init__.py | MIT |
def __init__(self, opt):
"""Initialize the BaseModel class.
Parameters:
opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions
When creating your custom class, you need to implement your own initialization.
In this function, you should f... | Initialize the BaseModel class.
Parameters:
opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions
When creating your custom class, you need to implement your own initialization.
In this function, you should first call <BaseModel.__init__(self, ... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/base_model.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/base_model.py | MIT |
def setup(self, opt):
"""Load and print networks; create schedulers
Parameters:
opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions
"""
if self.isTrain:
self.schedulers = [networks.get_scheduler(optimizer, opt) for optimiz... | Load and print networks; create schedulers
Parameters:
opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions
| setup | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/base_model.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/base_model.py | MIT |
def eval(self):
"""Make models eval mode during test time"""
for name in self.model_names:
if isinstance(name, str):
net = getattr(self, 'net' + name)
net.eval() | Make models eval mode during test time | eval | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/base_model.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/base_model.py | MIT |
def update_learning_rate(self):
"""Update learning rates for all the networks; called at the end of every epoch"""
old_lr = self.optimizers[0].param_groups[0]['lr']
for scheduler in self.schedulers:
if self.opt.lr_policy == 'plateau':
scheduler.step(self.metric)
... | Update learning rates for all the networks; called at the end of every epoch | update_learning_rate | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/base_model.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/base_model.py | MIT |
def get_current_visuals(self):
"""Return visualization images. train.py will display these images with visdom, and save the images to a HTML"""
visual_ret = OrderedDict()
for name in self.visual_names:
if isinstance(name, str):
visual_ret[name] = getattr(self, name)
... | Return visualization images. train.py will display these images with visdom, and save the images to a HTML | get_current_visuals | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/base_model.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/base_model.py | MIT |
def get_current_losses(self):
"""Return training losses / errors. train.py will print out these errors on console, and save them to a file"""
errors_ret = OrderedDict()
for name in self.loss_names:
if isinstance(name, str):
errors_ret[name] = float(getattr(self, 'loss_... | Return training losses / errors. train.py will print out these errors on console, and save them to a file | get_current_losses | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/base_model.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/base_model.py | MIT |
def save_networks(self, epoch):
"""Save all the networks to the disk.
Parameters:
epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name)
"""
for name in self.model_names:
if isinstance(name, str):
save_filename = '%s_n... | Save all the networks to the disk.
Parameters:
epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name)
| save_networks | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/base_model.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/base_model.py | MIT |
def __patch_instance_norm_state_dict(self, state_dict, module, keys, i=0):
"""Fix InstanceNorm checkpoints incompatibility (prior to 0.4)"""
key = keys[i]
if i + 1 == len(keys): # at the end, pointing to a parameter/buffer
if module.__class__.__name__.startswith('InstanceNorm') and ... | Fix InstanceNorm checkpoints incompatibility (prior to 0.4) | __patch_instance_norm_state_dict | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/base_model.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/base_model.py | MIT |
def load_networks(self, epoch):
"""Load all the networks from the disk.
Parameters:
epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name)
"""
for name in self.model_names:
if isinstance(name, str):
load_filename = '%s... | Load all the networks from the disk.
Parameters:
epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name)
| load_networks | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/base_model.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/base_model.py | MIT |
def print_networks(self, verbose):
"""Print the total number of parameters in the network and (if verbose) network architecture
Parameters:
verbose (bool) -- if verbose: print the network architecture
"""
print('---------- Networks initialized -------------')
for nam... | Print the total number of parameters in the network and (if verbose) network architecture
Parameters:
verbose (bool) -- if verbose: print the network architecture
| print_networks | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/base_model.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/base_model.py | MIT |
def set_requires_grad(self, nets, requires_grad=False):
"""Set requires_grad=False for all the networks to avoid unnecessary computations
Parameters:
nets (network list) -- a list of networks
requires_grad (bool) -- whether the networks require gradients or not
"""
... | Set requires_grad=False for all the networks to avoid unnecessary computations
Parameters:
nets (network list) -- a list of networks
requires_grad (bool) -- whether the networks require gradients or not
| set_requires_grad | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/base_model.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/base_model.py | MIT |
def get_norm_layer(norm_type='instance'):
"""Return a normalization layer
Parameters:
norm_type (str) -- the name of the normalization layer: batch | instance | none
For BatchNorm, we use learnable affine parameters and track running statistics (mean/stddev).
For InstanceNorm, we do not use le... | Return a normalization layer
Parameters:
norm_type (str) -- the name of the normalization layer: batch | instance | none
For BatchNorm, we use learnable affine parameters and track running statistics (mean/stddev).
For InstanceNorm, we do not use learnable affine parameters. We do not track runnin... | get_norm_layer | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/networks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/networks.py | MIT |
def init_weights(net, init_type='normal', init_gain=0.02):
"""Initialize network weights.
Parameters:
net (network) -- network to be initialized
init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal
init_gain (float) -- scaling factor for n... | Initialize network weights.
Parameters:
net (network) -- network to be initialized
init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal
init_gain (float) -- scaling factor for normal, xavier and orthogonal.
We use 'normal' in the original... | init_weights | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/networks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/networks.py | MIT |
def init_net(net, init_type='normal', init_gain=0.02, gpu_ids=[]):
"""Initialize a network: 1. register CPU/GPU device (with multi-GPU support); 2. initialize the network weights
Parameters:
net (network) -- the network to be initialized
init_type (str) -- the name of an initialization m... | Initialize a network: 1. register CPU/GPU device (with multi-GPU support); 2. initialize the network weights
Parameters:
net (network) -- the network to be initialized
init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal
gain (float) -... | init_net | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/networks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/networks.py | MIT |
def define_G(input_nc, output_nc, ngf, netG, norm='batch', use_dropout=False, init_type='normal', init_gain=0.02, gpu_ids=[]):
"""Create a generator
Parameters:
input_nc (int) -- the number of channels in input images
output_nc (int) -- the number of channels in output images
ngf (int) ... | Create a generator
Parameters:
input_nc (int) -- the number of channels in input images
output_nc (int) -- the number of channels in output images
ngf (int) -- the number of filters in the last conv layer
netG (str) -- the architecture's name: resnet_9blocks | resnet_6blocks | unet_... | define_G | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/networks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/networks.py | MIT |
def __init__(self, gan_mode, target_real_label=1.0, target_fake_label=0.0):
""" Initialize the GANLoss class.
Parameters:
gan_mode (str) - - the type of GAN objective. It currently supports vanilla, lsgan, and wgangp.
target_real_label (bool) - - label for a real image
... | Initialize the GANLoss class.
Parameters:
gan_mode (str) - - the type of GAN objective. It currently supports vanilla, lsgan, and wgangp.
target_real_label (bool) - - label for a real image
target_fake_label (bool) - - label of a fake image
Note: Do not use sigmoid... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/networks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/networks.py | MIT |
def get_target_tensor(self, prediction, target_is_real):
"""Create label tensors with the same size as the input.
Parameters:
prediction (tensor) - - typically the prediction from a discriminator
target_is_real (bool) - - if the ground truth label is for real images or fake imag... | Create label tensors with the same size as the input.
Parameters:
prediction (tensor) - - typically the prediction from a discriminator
target_is_real (bool) - - if the ground truth label is for real images or fake images
Returns:
A label tensor filled with ground t... | get_target_tensor | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/networks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/networks.py | MIT |
def __call__(self, prediction, target_is_real):
"""Calculate loss given Discriminator's output and ground truth labels.
Parameters:
prediction (tensor) - - typically the prediction output from a discriminator
target_is_real (bool) - - if the ground truth label is for real images... | Calculate loss given Discriminator's output and ground truth labels.
Parameters:
prediction (tensor) - - typically the prediction output from a discriminator
target_is_real (bool) - - if the ground truth label is for real images or fake images
Returns:
the calculate... | __call__ | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/networks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/networks.py | MIT |
def cal_gradient_penalty(netD, real_data, fake_data, device, type='mixed', constant=1.0, lambda_gp=10.0):
"""Calculate the gradient penalty loss, used in WGAN-GP paper https://arxiv.org/abs/1704.00028
Arguments:
netD (network) -- discriminator network
real_data (tensor array) --... | Calculate the gradient penalty loss, used in WGAN-GP paper https://arxiv.org/abs/1704.00028
Arguments:
netD (network) -- discriminator network
real_data (tensor array) -- real images
fake_data (tensor array) -- generated images from the generator
device (str) ... | cal_gradient_penalty | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/networks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/networks.py | MIT |
def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='reflect'):
"""Construct a Resnet-based generator
Parameters:
input_nc (int) -- the number of channels in input images
output_nc (int) -- the number of... | Construct a Resnet-based generator
Parameters:
input_nc (int) -- the number of channels in input images
output_nc (int) -- the number of channels in output images
ngf (int) -- the number of filters in the last conv layer
norm_layer -- ... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/networks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/networks.py | MIT |
def __init__(self, dim, padding_type, norm_layer, use_dropout, use_bias):
"""Initialize the Resnet block
A resnet block is a conv block with skip connections
We construct a conv block with build_conv_block function,
and implement skip connections in <forward> function.
Original ... | Initialize the Resnet block
A resnet block is a conv block with skip connections
We construct a conv block with build_conv_block function,
and implement skip connections in <forward> function.
Original Resnet paper: https://arxiv.org/pdf/1512.03385.pdf
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | pix2pix/models/networks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/networks.py | MIT |
code:
    def build_conv_block(self, dim, padding_type, norm_layer, use_dropout, use_bias):
        """Construct a convolutional block.
        Parameters:
            dim (int)          -- the number of channels in the conv layer.
            padding_type (str) -- the name of padding layer: reflect | replicate | zero
            ...
docstring:
    Construct a convolutional block.
    Parameters:
        dim (int)          -- the number of channels in the conv layer.
        padding_type (str) -- the name of padding layer: reflect | replicate | zero
        norm_layer         -- normalization layer
        use_dropout (bool) -- if use dro...
func_name: build_conv_block | language: python | repo: thygate/stable-diffusion-webui-depthmap-script | path: pix2pix/models/networks.py | url: https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/networks.py | license: MIT
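The three `padding_type` options named in the docstring differ only in which values fill the border. The torch layers are `ReflectionPad2d`, `ReplicationPad2d`, and zero padding inside the conv; the NumPy sketch below shows the same three modes on a 1-D array so the difference is visible at a glance:

```python
import numpy as np

# Mapping the docstring's names onto NumPy pad modes:
#   reflect -> 'reflect', replicate -> 'edge', zero -> 'constant'
x = np.array([1, 2, 3])

reflect   = np.pad(x, 1, mode='reflect')   # mirror without repeating the edge
replicate = np.pad(x, 1, mode='edge')      # repeat the border value
zero      = np.pad(x, 1, mode='constant')  # pad with zeros

print(reflect)    # [2 1 2 3 2]
print(replicate)  # [1 1 2 3 3]
print(zero)       # [0 1 2 3 0]
```

Reflection padding is the repository's default for generators because it avoids the dark border artifacts that zero padding can introduce.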
code:
    def __init__(self, input_nc, output_nc, num_downs, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False):
        """Construct a Unet generator
        Parameters:
            input_nc (int)  -- the number of channels in input images
            output_nc (int) -- the number of channels in output images
            num...
docstring:
    Construct a Unet generator
    Parameters:
        input_nc (int)  -- the number of channels in input images
        output_nc (int) -- the number of channels in output images
        num_downs (int) -- the number of downsamplings in UNet. For example, # if |num_downs| == 7,
        ...
func_name: __init__ | language: python | repo: thygate/stable-diffusion-webui-depthmap-script | path: pix2pix/models/networks.py | url: https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/networks.py | license: MIT
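The `num_downs` arithmetic is simple to check: each UNet downsampling halves the spatial resolution, so a map of size `input_size / 2**num_downs` is left at the bottleneck. A minimal sketch of that bookkeeping:

```python
def bottleneck_size(input_size, num_downs):
    """Each UNet downsampling halves the spatial resolution."""
    size = input_size
    for _ in range(num_downs):
        size //= 2
    return size

print(bottleneck_size(128, 7))  # 1 -> a 128x128 image shrinks to 1x1
print(bottleneck_size(256, 8))  # 1 -> 256x256 needs 8 downsamplings
```

This is why `num_downs` must match the training resolution: too few downsamplings leave a larger bottleneck, too many would try to halve a 1x1 map.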
code:
    def __init__(self, outer_nc, inner_nc, input_nc=None,
                 submodule=None, outermost=False, innermost=False, norm_layer=nn.BatchNorm2d, use_dropout=False):
        """Construct a Unet submodule with skip connections.
        Parameters:
            outer_nc (int) -- the number of filters in the outer conv ...
docstring:
    Construct a Unet submodule with skip connections.
    Parameters:
        outer_nc (int) -- the number of filters in the outer conv layer
        inner_nc (int) -- the number of filters in the inner conv layer
        input_nc (int) -- the number of channels in input images/features
        submodu...
func_name: __init__ | language: python | repo: thygate/stable-diffusion-webui-depthmap-script | path: pix2pix/models/networks.py | url: https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/networks.py | license: MIT
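The skip connection in `UnetSkipConnectionBlock` works by concatenation: every non-outermost block returns its input stacked with its submodule's output along the channel axis, while the outermost block returns the submodule output alone. The NumPy sketch below reproduces only that shape logic (channels-first layout assumed, submodule replaced by a stand-in):

```python
import numpy as np

def unet_skip_forward(x, submodule, outermost=False):
    """Shape logic of a UNet skip block (NCHW): non-outermost blocks
    concatenate the input with the submodule output along channels."""
    y = submodule(x)
    if outermost:
        return y
    return np.concatenate([x, y], axis=1)  # the skip connection

x = np.zeros((1, 3, 8, 8))
# Stand-in submodule that maps 3 channels to 5 at the same resolution.
sub = lambda t: np.zeros((t.shape[0], 5, t.shape[2], t.shape[3]))

print(unet_skip_forward(x, sub).shape)  # (1, 8, 8, 8): 3 + 5 channels
```

This channel doubling at every level is why the decoder convolutions in a UNet take twice the channels their mirrored encoder layer produced.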
code:
    def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d):
        """Construct a PatchGAN discriminator
        Parameters:
            input_nc (int)  -- the number of channels in input images
            ndf (int)       -- the number of filters in the last conv layer
            n_layers (int)  --...
docstring:
    Construct a PatchGAN discriminator
    Parameters:
        input_nc (int)  -- the number of channels in input images
        ndf (int)       -- the number of filters in the last conv layer
        n_layers (int)  -- the number of conv layers in the discriminator
        norm_layer      -- normaliza...
func_name: __init__ | language: python | repo: thygate/stable-diffusion-webui-depthmap-script | path: pix2pix/models/networks.py | url: https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/networks.py | license: MIT
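A PatchGAN classifies overlapping patches rather than the whole image, and the patch size is the receptive field of its conv stack. Assuming the standard pix2pix `NLayerDiscriminator` layout (4x4 convolutions, stride 2 for the first `n_layers`, then two stride-1 layers), the classic backward recurrence recovers the familiar 70x70 patch for the default `n_layers=3`:

```python
def receptive_field(layers):
    """Walk the conv stack backwards: rf = (rf - 1) * stride + kernel."""
    rf = 1
    for kernel, stride in reversed(layers):
        rf = (rf - 1) * stride + kernel
    return rf

# Assumed layout for n_layers=3: three stride-2 4x4 convs, then two
# stride-1 4x4 convs (the last one produces the 1-channel score map).
n_layers = 3
layers = [(4, 2)] * n_layers + [(4, 1), (4, 1)]
print(receptive_field(layers))  # 70 -> the classic 70x70 PatchGAN
```

The same formula gives a 16x16 patch for `n_layers=1`, which matches how `n_layers` trades local texture detail against global structure.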
code:
    def __init__(self, input_nc, ndf=64, norm_layer=nn.BatchNorm2d):
        """Construct a 1x1 PatchGAN discriminator
        Parameters:
            input_nc (int) -- the number of channels in input images
            ndf (int)      -- the number of filters in the last conv layer
            norm_layer     -- normali...
docstring:
    Construct a 1x1 PatchGAN discriminator
    Parameters:
        input_nc (int) -- the number of channels in input images
        ndf (int)      -- the number of filters in the last conv layer
        norm_layer     -- normalization layer
func_name: __init__ | language: python | repo: thygate/stable-diffusion-webui-depthmap-script | path: pix2pix/models/networks.py | url: https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/networks.py | license: MIT
code:
    def modify_commandline_options(parser, is_train=True):
        """Add new dataset-specific options, and rewrite default values for existing options.
        Parameters:
            parser          -- original option parser
            is_train (bool) -- whether training phase or test phase. You can use this flag to ad...
docstring:
    Add new dataset-specific options, and rewrite default values for existing options.
    Parameters:
        parser          -- original option parser
        is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options.
    Returns:
        ...
func_name: modify_commandline_options | language: python | repo: thygate/stable-diffusion-webui-depthmap-script | path: pix2pix/models/pix2pix4depth_model.py | url: https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/pix2pix4depth_model.py | license: MIT
code:
    def __init__(self, opt):
        """Initialize the pix2pix class.
        Parameters:
            opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions
        """
        BaseModel.__init__(self, opt)
        # specify the training losses you want to print out. The training/test ...
docstring:
    Initialize the pix2pix class.
    Parameters:
        opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions
func_name: __init__ | language: python | repo: thygate/stable-diffusion-webui-depthmap-script | path: pix2pix/models/pix2pix4depth_model.py | url: https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/pix2pix4depth_model.py | license: MIT
code:
    def backward_D(self):
        """Calculate GAN loss for the discriminator"""
        # Fake; stop backprop to the generator by detaching fake_B
        fake_AB = torch.cat((self.real_A, self.fake_B), 1)  # we use conditional GANs; we need to feed both input and output to the discriminator
        pred_fake = self.netD(...
docstring:
    Calculate GAN loss for the discriminator
func_name: backward_D | language: python | repo: thygate/stable-diffusion-webui-depthmap-script | path: pix2pix/models/pix2pix4depth_model.py | url: https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/pix2pix/models/pix2pix4depth_model.py | license: MIT
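The conditional-GAN idea in `backward_D` is that the discriminator scores the (input, output) pair, with `fake_B` detached so gradients stop at the generator, and the final loss averages the fake and real terms. The NumPy sketch below reproduces only that loss arithmetic, using an LSGAN-style MSE criterion as an assumed stand-in for the repository's configurable GAN loss:

```python
import numpy as np

def gan_loss(pred, target_is_real):
    """LSGAN-style criterion: MSE against a 1.0 (real) or 0.0 (fake) label."""
    target = 1.0 if target_is_real else 0.0
    return np.mean((pred - target) ** 2)

def discriminator_loss(pred_fake, pred_real):
    """Mirror backward_D: average the fake and real terms (the 0.5 factor
    matches the pix2pix convention of halving the discriminator loss)."""
    loss_fake = gan_loss(pred_fake, target_is_real=False)
    loss_real = gan_loss(pred_real, target_is_real=True)
    return 0.5 * (loss_fake + loss_real)

pred_fake = np.array([0.0, 0.0])  # D already rejects the fakes
pred_real = np.array([1.0, 1.0])  # D already accepts the reals
print(discriminator_loss(pred_fake, pred_real))  # 0.0 at the optimum
```

Swapping the two prediction arrays gives the worst case (loss 1.0), a quick sanity check that the labels are wired the right way round.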