| code (string, 66–870k chars) | docstring (string, 19–26.7k chars) | func_name (string, 1–138 chars) | language (1 class) | repo (string, 7–68 chars) | path (string, 5–324 chars) | url (string, 46–389 chars) | license (7 classes) |
|---|---|---|---|---|---|---|---|
def marginal_log_mean_coeff(self, t):
"""
Compute log(alpha_t) of a given continuous-time label t in [0, T].
"""
if self.schedule == 'discrete':
return interpolate_fn(t.reshape((-1, 1)), self.t_array.to(t.device),
self.log_alpha_array.to(t.de... |
Compute log(alpha_t) of a given continuous-time label t in [0, T].
| marginal_log_mean_coeff | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
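For the `'linear'` (VP) schedule branch of this method, `log(alpha_t)` has a closed form. A minimal scalar sketch (the function name and the `beta_0`/`beta_1` defaults are assumptions chosen to match the usual VP-SDE configuration, not the repo's API):

```python
def marginal_log_mean_coeff_linear(t, beta_0=0.1, beta_1=20.0):
    # log(alpha_t) = -(beta_1 - beta_0) * t^2 / 4 - beta_0 * t / 2
    return -((beta_1 - beta_0) / 4.0) * t ** 2 - (beta_0 / 2.0) * t
```

At `t = 0` this gives `log(alpha_0) = 0` (no noise), and it decreases monotonically toward `t = 1`.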
def marginal_lambda(self, t):
"""
Compute lambda_t = log(alpha_t) - log(sigma_t) of a given continuous-time label t in [0, T].
"""
log_mean_coeff = self.marginal_log_mean_coeff(t)
log_std = 0.5 * torch.log(1. - torch.exp(2. * log_mean_coeff))
return log_mean_coeff - log_s... |
Compute lambda_t = log(alpha_t) - log(sigma_t) of a given continuous-time label t in [0, T].
| marginal_lambda | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
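The half-logSNR computation in this row reduces to one line of scalar math once `log(alpha_t)` is known. A pure-Python sketch (scalar stand-in for the tensor version above):

```python
import math

def marginal_lambda(log_alpha):
    # VP schedules satisfy alpha_t^2 + sigma_t^2 = 1, so
    # sigma_t = sqrt(1 - alpha_t^2) and lambda_t = log(alpha_t) - log(sigma_t)
    log_sigma = 0.5 * math.log(1.0 - math.exp(2.0 * log_alpha))
    return log_alpha - log_sigma
```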
def inverse_lambda(self, lamb):
"""
Compute the continuous-time label t in [0, T] of a given half-logSNR lambda_t.
"""
if self.schedule == 'linear':
tmp = 2. * (self.beta_1 - self.beta_0) * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb))
Delta = self.beta_... |
Compute the continuous-time label t in [0, T] of a given half-logSNR lambda_t.
| inverse_lambda | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
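For the `'linear'` schedule branch shown in this row, the inversion solves a quadratic in `t`, using the identity `log(1 + exp(-2*lambda_t)) = -2*log(alpha_t)`. A scalar sketch (defaults assumed, not taken from the repo):

```python
import math

def inverse_lambda_linear(lamb, beta_0=0.1, beta_1=20.0):
    # log(1 + exp(-2*lambda)) = -2*log(alpha_t), which is quadratic in t
    # for the linear VP schedule; solve it in closed form.
    tmp = 2.0 * (beta_1 - beta_0) * math.log1p(math.exp(-2.0 * lamb))
    delta = beta_0 ** 2 + tmp
    return tmp / (math.sqrt(delta) + beta_0) / (beta_1 - beta_0)
```

A round trip through `lambda(t)` recovers `t`, which is how the solver maps logSNR grid points back to time labels.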
def model_wrapper(
model,
noise_schedule,
model_type="noise",
model_kwargs={},
guidance_type="uncond",
condition=None,
unconditional_condition=None,
guidance_scale=1.,
classifier_fn=None,
classifier_kwargs={},
):
"""Create a wrapper fun... | Create a wrapper function for the noise prediction model.
DPM-Solver needs to solve the continuous-time diffusion ODEs. For DPMs trained on discrete-time labels, we need to
firstly wrap the model function to a noise prediction model that accepts the continuous time as the input.
We support four types of the... | model_wrapper | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
def get_model_input_time(t_continuous):
"""
Convert the continuous-time `t_continuous` (in [epsilon, T]) to the model input time.
For discrete-time DPMs, we convert `t_continuous` in [1 / N, 1] to `t_input` in [0, 1000 * (N - 1) / N].
For continuous-time DPMs, we just use `t_continuous`.... |
Convert the continuous-time `t_continuous` (in [epsilon, T]) to the model input time.
For discrete-time DPMs, we convert `t_continuous` in [1 / N, 1] to `t_input` in [0, 1000 * (N - 1) / N].
For continuous-time DPMs, we just use `t_continuous`.
| get_model_input_time | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
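The discrete-time mapping described in this docstring is a single affine rescale. A minimal sketch (`total_N` is an assumed parameter name for the number of discrete steps `N`):

```python
def get_model_input_time(t_continuous, total_N=1000):
    # Map t_continuous in [1/N, 1] to the discrete model's
    # input range [0, 1000 * (N - 1) / N].
    return 1000.0 * (t_continuous - 1.0 / total_N)
```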
def cond_grad_fn(x, t_input):
"""
Compute the gradient of the classifier, i.e. nabla_{x} log p_t(cond | x_t).
"""
with torch.enable_grad():
x_in = x.detach().requires_grad_(True)
log_prob = classifier_fn(x_in, t_input, condition, **classifier_kwargs)
r... |
Compute the gradient of the classifier, i.e. nabla_{x} log p_t(cond | x_t).
| cond_grad_fn | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
def model_fn(x, t_continuous):
"""
The noise prediction model function that is used for DPM-Solver.
"""
if t_continuous.reshape((-1,)).shape[0] == 1:
t_continuous = t_continuous.expand((x.shape[0]))
if guidance_type == "uncond":
return noise_pred_fn(x, t_... |
The noise prediction model function that is used for DPM-Solver.
| model_fn | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
def __init__(self, model_fn, noise_schedule, predict_x0=False, thresholding=False, max_val=1.):
"""Construct a DPM-Solver.
We support both the noise prediction model ("predicting epsilon") and the data prediction model ("predicting x0").
If `predict_x0` is False, we use the solver for the noise ... | Construct a DPM-Solver.
We support both the noise prediction model ("predicting epsilon") and the data prediction model ("predicting x0").
If `predict_x0` is False, we use the solver for the noise prediction model (DPM-Solver).
If `predict_x0` is True, we use the solver for the data prediction m... | __init__ | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
def data_prediction_fn(self, x, t):
"""
Return the data prediction model (with thresholding).
"""
noise = self.noise_prediction_fn(x, t)
dims = x.dim()
alpha_t, sigma_t = self.noise_schedule.marginal_alpha(t), self.noise_schedule.marginal_std(t)
x0 = (x - expand_d... |
Return the data prediction model (with thresholding).
| data_prediction_fn | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
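The core of this row's data prediction (before thresholding) is inverting the forward process. A scalar sketch of that step, with assumed names:

```python
def data_prediction(x_t, noise, alpha_t, sigma_t):
    # Forward process: x_t = alpha_t * x0 + sigma_t * eps,
    # so the predicted clean sample is:
    return (x_t - sigma_t * noise) / alpha_t
```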
def model_fn(self, x, t):
"""
Convert the model to the noise prediction model or the data prediction model.
"""
if self.predict_x0:
return self.data_prediction_fn(x, t)
else:
return self.noise_prediction_fn(x, t) |
Convert the model to the noise prediction model or the data prediction model.
| model_fn | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
def get_time_steps(self, skip_type, t_T, t_0, N, device):
"""Compute the intermediate time steps for sampling.
Args:
skip_type: A `str`. The type for the spacing of the time steps. We support three types:
- 'logSNR': uniform logSNR for the time steps.
- 'time_... | Compute the intermediate time steps for sampling.
Args:
skip_type: A `str`. The type for the spacing of the time steps. We support three types:
- 'logSNR': uniform logSNR for the time steps.
- 'time_uniform': uniform time for the time steps. (**Recommended for high-re... | get_time_steps | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
def get_orders_and_timesteps_for_singlestep_solver(self, steps, order, skip_type, t_T, t_0, device):
"""
Get the order of each step for sampling by the singlestep DPM-Solver.
We combine DPM-Solver-1,2,3 to use all the function evaluations, which is named "DPM-Solver-fast".
Given ... |
Get the order of each step for sampling by the singlestep DPM-Solver.
We combine DPM-Solver-1,2,3 to use all the function evaluations, which is named "DPM-Solver-fast".
Given a fixed number of function evaluations by `steps`, the sampling procedure by DPM-Solver-fast is:
- I... | get_orders_and_timesteps_for_singlestep_solver | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
def dpm_solver_first_update(self, x, s, t, model_s=None, return_intermediate=False):
"""
DPM-Solver-1 (equivalent to DDIM) from time `s` to time `t`.
Args:
x: A pytorch tensor. The initial value at time `s`.
s: A pytorch tensor. The starting time, with the shape (x.shape[... |
DPM-Solver-1 (equivalent to DDIM) from time `s` to time `t`.
Args:
x: A pytorch tensor. The initial value at time `s`.
s: A pytorch tensor. The starting time, with the shape (x.shape[0],).
t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
... | dpm_solver_first_update | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
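The DPM-Solver-1 update in this row (equivalent to DDIM) has a short exact form in the noise-prediction parameterization. A scalar sketch under the VP convention `alpha^2 + sigma^2 = 1` (names assumed, not the repo's signature):

```python
import math

def dpm_solver_first_update(x, eps, alpha_s, sigma_s, alpha_t, sigma_t):
    # h = lambda_t - lambda_s, with lambda = log(alpha) - log(sigma);
    # update: x_t = (alpha_t / alpha_s) * x_s - sigma_t * (e^h - 1) * eps
    h = (math.log(alpha_t) - math.log(sigma_t)) - (math.log(alpha_s) - math.log(sigma_s))
    return (alpha_t / alpha_s) * x - sigma_t * math.expm1(h) * eps
```

If the model returns the true noise, this step maps `alpha_s * x0 + sigma_s * eps` exactly to `alpha_t * x0 + sigma_t * eps`, which is a useful sanity check.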
def singlestep_dpm_solver_second_update(self, x, s, t, r1=0.5, model_s=None, return_intermediate=False,
solver_type='dpm_solver'):
"""
Singlestep solver DPM-Solver-2 from time `s` to time `t`.
Args:
x: A pytorch tensor. The initial value at... |
Singlestep solver DPM-Solver-2 from time `s` to time `t`.
Args:
x: A pytorch tensor. The initial value at time `s`.
s: A pytorch tensor. The starting time, with the shape (x.shape[0],).
t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
r... | singlestep_dpm_solver_second_update | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
def singlestep_dpm_solver_third_update(self, x, s, t, r1=1. / 3., r2=2. / 3., model_s=None, model_s1=None,
return_intermediate=False, solver_type='dpm_solver'):
"""
Singlestep solver DPM-Solver-3 from time `s` to time `t`.
Args:
x: A pytorch... |
Singlestep solver DPM-Solver-3 from time `s` to time `t`.
Args:
x: A pytorch tensor. The initial value at time `s`.
s: A pytorch tensor. The starting time, with the shape (x.shape[0],).
t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
r... | singlestep_dpm_solver_third_update | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
def multistep_dpm_solver_second_update(self, x, model_prev_list, t_prev_list, t, solver_type="dpm_solver"):
"""
Multistep solver DPM-Solver-2 from time `t_prev_list[-1]` to time `t`.
Args:
x: A pytorch tensor. The initial value at time `s`.
model_prev_list: A list of pyto... |
Multistep solver DPM-Solver-2 from time `t_prev_list[-1]` to time `t`.
Args:
x: A pytorch tensor. The initial value at time `s`.
model_prev_list: A list of pytorch tensor. The previous computed model values.
t_prev_list: A list of pytorch tensor. The previous times, ... | multistep_dpm_solver_second_update | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
def multistep_dpm_solver_third_update(self, x, model_prev_list, t_prev_list, t, solver_type='dpm_solver'):
"""
Multistep solver DPM-Solver-3 from time `t_prev_list[-1]` to time `t`.
Args:
x: A pytorch tensor. The initial value at time `s`.
model_prev_list: A list of pytor... |
Multistep solver DPM-Solver-3 from time `t_prev_list[-1]` to time `t`.
Args:
x: A pytorch tensor. The initial value at time `s`.
model_prev_list: A list of pytorch tensor. The previous computed model values.
t_prev_list: A list of pytorch tensor. The previous times, ... | multistep_dpm_solver_third_update | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
def singlestep_dpm_solver_update(self, x, s, t, order, return_intermediate=False, solver_type='dpm_solver', r1=None,
r2=None):
"""
Singlestep DPM-Solver with the order `order` from time `s` to time `t`.
Args:
x: A pytorch tensor. The initial value... |
Singlestep DPM-Solver with the order `order` from time `s` to time `t`.
Args:
x: A pytorch tensor. The initial value at time `s`.
s: A pytorch tensor. The starting time, with the shape (x.shape[0],).
t: A pytorch tensor. The ending time, with the shape (x.shape[0],).... | singlestep_dpm_solver_update | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
def multistep_dpm_solver_update(self, x, model_prev_list, t_prev_list, t, order, solver_type='dpm_solver'):
"""
Multistep DPM-Solver with the order `order` from time `t_prev_list[-1]` to time `t`.
Args:
x: A pytorch tensor. The initial value at time `s`.
model_prev_list: ... |
Multistep DPM-Solver with the order `order` from time `t_prev_list[-1]` to time `t`.
Args:
x: A pytorch tensor. The initial value at time `s`.
model_prev_list: A list of pytorch tensor. The previous computed model values.
t_prev_list: A list of pytorch tensor. The pr... | multistep_dpm_solver_update | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
def sample(self, x, steps=20, t_start=None, t_end=None, order=3, skip_type='time_uniform',
method='singlestep', lower_order_final=True, denoise_to_zero=False, solver_type='dpm_solver',
atol=0.0078, rtol=0.05,
):
"""
Compute the sample at time `t_end` by DPM-S... |
Compute the sample at time `t_end` by DPM-Solver, given the initial `x` at time `t_start`.
=====================================================
We support the following algorithms for both noise prediction model and data prediction model:
- 'singlestep':
Singlestep ... | sample | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
def interpolate_fn(x, xp, yp):
"""
A piecewise linear function y = f(x), using xp and yp as keypoints.
We implement f(x) in a differentiable way (i.e. applicable for autograd).
The function f(x) is well-defined for the whole x-axis. (For x beyond the bounds of xp, we use the outermost points of xp to define the... |
A piecewise linear function y = f(x), using xp and yp as keypoints.
We implement f(x) in a differentiable way (i.e. applicable for autograd).
The function f(x) is well-defined for the whole x-axis. (For x beyond the bounds of xp, we use the outermost points of xp to define the linear function.)
Args:
x... | interpolate_fn | python | ali-vilab/AnyDoor | ldm/models/diffusion/dpm_solver/dpm_solver.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py | MIT |
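The interpolation behavior this docstring describes (linear inside the keypoints, extrapolation along the outermost segment outside them) can be sketched in plain Python; the repo's version does the same thing with batched, autograd-friendly tensor ops:

```python
import bisect

def piecewise_linear(x, xp, yp):
    # Find the segment containing x, clamping to the outermost
    # segment so out-of-range x extrapolates along the nearest piece.
    i = bisect.bisect_right(xp, x) - 1
    i = max(0, min(i, len(xp) - 2))
    t = (x - xp[i]) / (xp[i + 1] - xp[i])
    return yp[i] + t * (yp[i + 1] - yp[i])
```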
def zero_module(module):
"""
Zero out the parameters of a module and return it.
"""
for p in module.parameters():
p.detach().zero_()
return module |
Zero out the parameters of a module and return it.
| zero_module | python | ali-vilab/AnyDoor | ldm/modules/attention.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/attention.py | MIT |
def get_timestep_embedding(timesteps, embedding_dim):
"""
This matches the implementation in Denoising Diffusion Probabilistic Models:
From Fairseq.
Build sinusoidal embeddings.
This matches the implementation in tensor2tensor, but differs slightly
from the description in Section 3.5 of "Attenti... |
This matches the implementation in Denoising Diffusion Probabilistic Models:
From Fairseq.
Build sinusoidal embeddings.
This matches the implementation in tensor2tensor, but differs slightly
from the description in Section 3.5 of "Attention Is All You Need".
| get_timestep_embedding | python | ali-vilab/AnyDoor | ldm/modules/diffusionmodules/model.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/diffusionmodules/model.py | MIT |
def forward(self, x, emb):
"""
Apply the module to `x` given `emb` timestep embeddings.
""" |
Apply the module to `x` given `emb` timestep embeddings.
| forward | python | ali-vilab/AnyDoor | ldm/modules/diffusionmodules/openaimodel.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/diffusionmodules/openaimodel.py | MIT |
def forward(self, x, emb):
"""
Apply the block to a Tensor, conditioned on a timestep embedding.
:param x: an [N x C x ...] Tensor of features.
:param emb: an [N x emb_channels] Tensor of timestep embeddings.
:return: an [N x C x ...] Tensor of outputs.
"""
return... |
Apply the block to a Tensor, conditioned on a timestep embedding.
:param x: an [N x C x ...] Tensor of features.
:param emb: an [N x emb_channels] Tensor of timestep embeddings.
:return: an [N x C x ...] Tensor of outputs.
| forward | python | ali-vilab/AnyDoor | ldm/modules/diffusionmodules/openaimodel.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/diffusionmodules/openaimodel.py | MIT |
def count_flops_attn(model, _x, y):
"""
A counter for the `thop` package to count the operations in an
attention operation.
Meant to be used like:
macs, params = thop.profile(
model,
inputs=(inputs, timestamps),
custom_ops={QKVAttention: QKVAttention.count_flo... |
A counter for the `thop` package to count the operations in an
attention operation.
Meant to be used like:
macs, params = thop.profile(
model,
inputs=(inputs, timestamps),
custom_ops={QKVAttention: QKVAttention.count_flops},
)
| count_flops_attn | python | ali-vilab/AnyDoor | ldm/modules/diffusionmodules/openaimodel.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/diffusionmodules/openaimodel.py | MIT |
def forward(self, qkv):
"""
Apply QKV attention.
:param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs.
:return: an [N x (H * C) x T] tensor after attention.
"""
bs, width, length = qkv.shape
assert width % (3 * self.n_heads) == 0
ch = width // (3 ... |
Apply QKV attention.
:param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs.
:return: an [N x (H * C) x T] tensor after attention.
| forward | python | ali-vilab/AnyDoor | ldm/modules/diffusionmodules/openaimodel.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/diffusionmodules/openaimodel.py | MIT |
def forward(self, qkv):
"""
Apply QKV attention.
:param qkv: an [N x (3 * H * C) x T] tensor of Qs, Ks, and Vs.
:return: an [N x (H * C) x T] tensor after attention.
"""
bs, width, length = qkv.shape
assert width % (3 * self.n_heads) == 0
ch = width // (3 ... |
Apply QKV attention.
:param qkv: an [N x (3 * H * C) x T] tensor of Qs, Ks, and Vs.
:return: an [N x (H * C) x T] tensor after attention.
| forward | python | ali-vilab/AnyDoor | ldm/modules/diffusionmodules/openaimodel.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/diffusionmodules/openaimodel.py | MIT |
def forward(self, x, timesteps=None, context=None, y=None,**kwargs):
"""
Apply the model to an input batch.
:param x: an [N x C x ...] Tensor of inputs.
:param timesteps: a 1-D batch of timesteps.
:param context: conditioning plugged in via crossattn
:param y: an [N] Tens... |
Apply the model to an input batch.
:param x: an [N x C x ...] Tensor of inputs.
:param timesteps: a 1-D batch of timesteps.
:param context: conditioning plugged in via crossattn
:param y: an [N] Tensor of labels, if class-conditional.
:return: an [N x C x ...] Tensor of ... | forward | python | ali-vilab/AnyDoor | ldm/modules/diffusionmodules/openaimodel.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/diffusionmodules/openaimodel.py | MIT |
def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999):
"""
Create a beta schedule that discretizes the given alpha_t_bar function,
which defines the cumulative product of (1-beta) over time from t = [0,1].
:param num_diffusion_timesteps: the number of betas to produce.
:param a... |
Create a beta schedule that discretizes the given alpha_t_bar function,
which defines the cumulative product of (1-beta) over time from t = [0,1].
:param num_diffusion_timesteps: the number of betas to produce.
:param alpha_bar: a lambda that takes an argument t from 0 to 1 and
pr... | betas_for_alpha_bar | python | ali-vilab/AnyDoor | ldm/modules/diffusionmodules/util.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/diffusionmodules/util.py | MIT |
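The discretization this docstring describes follows from `alpha_bar` being the cumulative product of `(1 - beta)`: each beta is one ratio of consecutive `alpha_bar` values. A minimal list-based sketch:

```python
def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999):
    # beta_i = 1 - alpha_bar(t_{i+1}) / alpha_bar(t_i), clipped at max_beta
    betas = []
    for i in range(num_diffusion_timesteps):
        t1 = i / num_diffusion_timesteps
        t2 = (i + 1) / num_diffusion_timesteps
        betas.append(min(1.0 - alpha_bar(t2) / alpha_bar(t1), max_beta))
    return betas
```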
def checkpoint(func, inputs, params, flag):
"""
Evaluate a function without caching intermediate activations, allowing for
reduced memory at the expense of extra compute in the backward pass.
:param func: the function to evaluate.
:param inputs: the argument sequence to pass to `func`.
:param pa... |
Evaluate a function without caching intermediate activations, allowing for
reduced memory at the expense of extra compute in the backward pass.
:param func: the function to evaluate.
:param inputs: the argument sequence to pass to `func`.
:param params: a sequence of parameters `func` depends on bu... | checkpoint | python | ali-vilab/AnyDoor | ldm/modules/diffusionmodules/util.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/diffusionmodules/util.py | MIT |
def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False):
"""
Create sinusoidal timestep embeddings.
:param timesteps: a 1-D Tensor of N indices, one per batch element.
These may be fractional.
:param dim: the dimension of the output.
:param max_period: contr... |
Create sinusoidal timestep embeddings.
:param timesteps: a 1-D Tensor of N indices, one per batch element.
These may be fractional.
:param dim: the dimension of the output.
:param max_period: controls the minimum frequency of the embeddings.
:return: an [N x dim] Tensor of pos... | timestep_embedding | python | ali-vilab/AnyDoor | ldm/modules/diffusionmodules/util.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/diffusionmodules/util.py | MIT |
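A scalar sketch of the sinusoidal embedding this row computes, with frequencies decaying geometrically from 1 down to roughly `1/max_period`. The cos-then-sin concatenation order follows the guided-diffusion convention; the odd-`dim` zero-padding branch is omitted here:

```python
import math

def timestep_embedding(t, dim, max_period=10000):
    # Geometric frequency ladder over half the embedding width.
    half = dim // 2
    freqs = [math.exp(-math.log(max_period) * i / half) for i in range(half)]
    return ([math.cos(t * f) for f in freqs] +
            [math.sin(t * f) for f in freqs])
```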
def zero_module(module):
"""
Zero out the parameters of a module and return it.
"""
for p in module.parameters():
p.detach().zero_()
return module |
Zero out the parameters of a module and return it.
| zero_module | python | ali-vilab/AnyDoor | ldm/modules/diffusionmodules/util.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/diffusionmodules/util.py | MIT |
def scale_module(module, scale):
"""
Scale the parameters of a module and return it.
"""
for p in module.parameters():
p.detach().mul_(scale)
return module |
Scale the parameters of a module and return it.
| scale_module | python | ali-vilab/AnyDoor | ldm/modules/diffusionmodules/util.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/diffusionmodules/util.py | MIT |
def normal_kl(mean1, logvar1, mean2, logvar2):
"""
source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12
Compute the KL divergence between two gaussians.
Shapes are automatically broadcasted, so batches can be compared to
scal... |
source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12
Compute the KL divergence between two gaussians.
Shapes are automatically broadcasted, so batches can be compared to
scalars, among other use cases.
| normal_kl | python | ali-vilab/AnyDoor | ldm/modules/distributions/distributions.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/distributions/distributions.py | MIT |
def modcrop_np(img, sf):
'''
Args:
img: numpy image, WxH or WxHxC
sf: scale factor
Return:
cropped image
'''
w, h = img.shape[:2]
im = np.copy(img)
return im[:w - w % sf, :h - h % sf, ...] |
Args:
img: numpy image, WxH or WxHxC
sf: scale factor
Return:
cropped image
| modcrop_np | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan.py | MIT |
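The crop this row performs is purely a shape computation: each spatial dimension is truncated to the nearest multiple of the scale factor. A shape-only sketch (the name is mine; the repo's version slices the actual array):

```python
def modcrop_shape(w, h, sf):
    # Truncate both dims to the nearest multiple of the scale factor.
    return w - w % sf, h - h % sf
```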
def analytic_kernel(k):
"""Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)"""
k_size = k.shape[0]
# Calculate the big kernels size
big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2))
# Loop over the small kernel to fill the big one
for r in range(k_size):
for ... | Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper) | analytic_kernel | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan.py | MIT |
def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6):
""" generate an anisotropic Gaussian kernel
Args:
ksize : e.g., 15, kernel size
theta : [0, pi], rotation angle range
l1 : [0.1,50], scaling of eigenvalues
l2 : [0.1,l1], scaling of eigenvalues
If l1 = l2... | generate an anisotropic Gaussian kernel
Args:
ksize : e.g., 15, kernel size
theta : [0, pi], rotation angle range
l1 : [0.1,50], scaling of eigenvalues
l2 : [0.1,l1], scaling of eigenvalues
If l1 = l2, will get an isotropic Gaussian kernel.
Returns:
k ... | anisotropic_Gaussian | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan.py | MIT |
def shift_pixel(x, sf, upper_left=True):
"""shift pixel for super-resolution with different scale factors
Args:
x: WxHxC or WxH
sf: scale factor
upper_left: shift direction
"""
h, w = x.shape[:2]
shift = (sf - 1) * 0.5
xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0)
... | shift pixel for super-resolution with different scale factors
Args:
x: WxHxC or WxH
sf: scale factor
upper_left: shift direction
| shift_pixel | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan.py | MIT |
def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0):
""""
# modified version of https://github.com/assafshocher/BlindSR_dataset_generator
# Kai Zhang
# min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var an... | "
# modified version of https://github.com/assafshocher/BlindSR_dataset_generator
# Kai Zhang
# min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var
# max_var = 2.5 * sf
| gen_kernel | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan.py | MIT |
def fspecial(filter_type, *args, **kwargs):
'''
python code from:
https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py
'''
if filter_type == 'gaussian':
return fspecial_gaussian(*args, **kwargs)
if... |
python code from:
https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py
| fspecial | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan.py | MIT |
def srmd_degradation(x, k, sf=3):
''' blur + bicubic downsampling
Args:
x: HxWxC image, [0, 1]
k: hxw, double
sf: down-scale factor
Return:
downsampled LR image
Reference:
@inproceedings{zhang2018learning,
title={Learning a single convolutional super-res... | blur + bicubic downsampling
Args:
x: HxWxC image, [0, 1]
k: hxw, double
sf: down-scale factor
Return:
downsampled LR image
Reference:
@inproceedings{zhang2018learning,
title={Learning a single convolutional super-resolution network for multiple degradations... | srmd_degradation | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan.py | MIT |
def dpsr_degradation(x, k, sf=3):
''' bicubic downsampling + blur
Args:
x: HxWxC image, [0, 1]
k: hxw, double
sf: down-scale factor
Return:
downsampled LR image
Reference:
@inproceedings{zhang2019deep,
title={Deep Plug-and-Play Super-Resolution for Arbit... | bicubic downsampling + blur
Args:
x: HxWxC image, [0, 1]
k: hxw, double
sf: down-scale factor
Return:
downsampled LR image
Reference:
@inproceedings{zhang2019deep,
title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels},
author={Zha... | dpsr_degradation | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan.py | MIT |
def classical_degradation(x, k, sf=3):
''' blur + downsampling
Args:
x: HxWxC image, [0, 1]/[0, 255]
k: hxw, double
sf: down-scale factor
Return:
downsampled LR image
'''
x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
# x = filters.correla... | blur + downsampling
Args:
x: HxWxC image, [0, 1]/[0, 255]
k: hxw, double
sf: down-scale factor
Return:
downsampled LR image
| classical_degradation | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan.py | MIT |
def add_sharpening(img, weight=0.5, radius=50, threshold=10):
"""USM sharpening. borrowed from real-ESRGAN
Input image: I; Blurry image: B.
1. K = I + weight * (I - B)
2. Mask = 1 if abs(I - B) > threshold, else: 0
3. Blur mask:
4. Out = Mask * K + (1 - Mask) * I
Args:
img (Numpy arr... | USM sharpening. borrowed from real-ESRGAN
Input image: I; Blurry image: B.
1. K = I + weight * (I - B)
2. Mask = 1 if abs(I - B) > threshold, else: 0
3. Blur mask:
4. Out = Mask * K + (1 - Mask) * I
Args:
img (Numpy array): Input image, HWC, BGR; float32, [0, 1].
weight (float): ... | add_sharpening | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan.py | MIT |
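Steps 1–2 and 4 of the USM recipe in this docstring can be sketched on a 1-D signal; the mask-blurring in step 3 (and the cv2 Gaussian blur that produces B) are omitted in this illustration:

```python
def usm_sharpen_1d(img, blurred, weight=0.5, threshold=10):
    # K = I + weight * (I - B); keep I where the residual is small.
    # (The full version also blurs the 0/1 mask before blending.)
    out = []
    for i_px, b_px in zip(img, blurred):
        k = i_px + weight * (i_px - b_px)
        out.append(k if abs(i_px - b_px) > threshold else i_px)
    return out
```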
def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None):
"""
This is the degradation model of BSRGAN from the paper
"Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
----------
img: HXWXC, [0, 1], its size should be larger than (lq_patchsizexsf)x(lq_patchsizex... |
This is the degradation model of BSRGAN from the paper
"Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
----------
img: HXWXC, [0, 1], its size should be larger than (lq_patchsizexsf)x(lq_patchsizexsf)
sf: scale factor
isp_model: camera ISP model
Returns
--... | degradation_bsrgan | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan.py | MIT |
def degradation_bsrgan_variant(image, sf=4, isp_model=None):
"""
This is the degradation model of BSRGAN from the paper
"Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
----------
sf: scale factor
isp_model: camera ISP model
Returns
-------
img: low-qua... |
This is the degradation model of BSRGAN from the paper
"Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
----------
sf: scale factor
isp_model: camera ISP model
Returns
-------
img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]
hq:... | degradation_bsrgan_variant | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan.py | MIT |
def degradation_bsrgan_plus(img, sf=4, shuffle_prob=0.5, use_sharp=True, lq_patchsize=64, isp_model=None):
"""
This is an extended degradation model by combining
the degradation models of BSRGAN and Real-ESRGAN
----------
img: HXWXC, [0, 1], its size should be larger than (lq_patchsizexsf)x(lq_patchs... |
This is an extended degradation model by combining
the degradation models of BSRGAN and Real-ESRGAN
----------
img: HXWXC, [0, 1], its size should be larger than (lq_patchsizexsf)x(lq_patchsizexsf)
sf: scale factor
use_shuffle: the degradation shuffle
use_sharp: sharpening the img
Return... | degradation_bsrgan_plus | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan.py | MIT |
def modcrop_np(img, sf):
'''
Args:
img: numpy image, WxH or WxHxC
sf: scale factor
Return:
cropped image
'''
w, h = img.shape[:2]
im = np.copy(img)
return im[:w - w % sf, :h - h % sf, ...] |
Args:
img: numpy image, WxH or WxHxC
sf: scale factor
Return:
cropped image
| modcrop_np | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan_light.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan_light.py | MIT |
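The `modcrop_np` row above is short enough to reconstruct in full; here is a self-contained sketch with a usage example (the shapes in the comments are illustrative, not taken from the dataset):

```python
import numpy as np

def modcrop_np(img, sf):
    """Crop img so that both spatial dimensions are divisible by sf."""
    w, h = img.shape[:2]
    im = np.copy(img)
    return im[:w - w % sf, :h - h % sf, ...]

# A 7x9x3 image cropped for scale factor 4 becomes 4x8x3.
cropped = modcrop_np(np.zeros((7, 9, 3)), 4)
print(cropped.shape)  # -> (4, 8, 3)
```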
def analytic_kernel(k):
"""Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)"""
k_size = k.shape[0]
# Calculate the big kernels size
big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2))
# Loop over the small kernel to fill the big one
for r in range(k_size):
for ... | Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper) | analytic_kernel | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan_light.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan_light.py | MIT |
def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6):
""" generate an anisotropic Gaussian kernel
Args:
ksize : e.g., 15, kernel size
theta : [0, pi], rotation angle range
l1 : [0.1,50], scaling of eigenvalues
l2 : [0.1,l1], scaling of eigenvalues
If l1 = l2... | generate an anisotropic Gaussian kernel
Args:
ksize : e.g., 15, kernel size
theta : [0, pi], rotation angle range
l1 : [0.1,50], scaling of eigenvalues
l2 : [0.1,l1], scaling of eigenvalues
If l1 = l2, will get an isotropic Gaussian kernel.
Returns:
k ... | anisotropic_Gaussian | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan_light.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan_light.py | MIT |
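The `anisotropic_Gaussian` docstring describes a kernel parameterized by a rotation angle and two eigenvalue scalings. Below is a sketch of the standard construction (rotate a diagonal covariance, then evaluate the Gaussian on the pixel grid); the exact sampling and normalization inside the dataset's function may differ:

```python
import numpy as np

def anisotropic_gaussian(ksize=15, theta=0.0, l1=6.0, l2=6.0):
    """Kernel with covariance V @ diag(l1, l2) @ V.T, V a rotation by theta."""
    v = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    inv = np.linalg.inv(v @ np.diag([l1, l2]) @ v.T)
    c = (ksize - 1) / 2.0
    ys, xs = np.mgrid[0:ksize, 0:ksize]
    d = np.stack([xs - c, ys - c], axis=-1)          # offsets from the center
    k = np.exp(-0.5 * np.einsum('...i,ij,...j->...', d, inv, d))
    return k / k.sum()                               # normalize to sum 1
```

With `l1 == l2` the covariance is isotropic, so `theta` has no effect, matching the docstring's remark.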
def shift_pixel(x, sf, upper_left=True):
"""shift pixel for super-resolution with different scale factors
Args:
x: WxHxC or WxH
sf: scale factor
upper_left: shift direction
"""
h, w = x.shape[:2]
shift = (sf - 1) * 0.5
xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0)
... | shift pixel for super-resolution with different scale factors
Args:
x: WxHxC or WxH
sf: scale factor
upper_left: shift direction
| shift_pixel | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan_light.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan_light.py | MIT |
def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0):
""""
# modified version of https://github.com/assafshocher/BlindSR_dataset_generator
# Kai Zhang
# min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var an... | "
# modified version of https://github.com/assafshocher/BlindSR_dataset_generator
# Kai Zhang
# min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var
# max_var = 2.5 * sf
| gen_kernel | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan_light.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan_light.py | MIT |
def fspecial(filter_type, *args, **kwargs):
'''
python code from:
https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py
'''
if filter_type == 'gaussian':
return fspecial_gaussian(*args, **kwargs)
if... |
python code from:
https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py
| fspecial | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan_light.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan_light.py | MIT |
def srmd_degradation(x, k, sf=3):
''' blur + bicubic downsampling
Args:
x: HxWxC image, [0, 1]
k: hxw, double
sf: down-scale factor
Return:
downsampled LR image
Reference:
@inproceedings{zhang2018learning,
title={Learning a single convolutional super-res... | blur + bicubic downsampling
Args:
x: HxWxC image, [0, 1]
k: hxw, double
sf: down-scale factor
Return:
downsampled LR image
Reference:
@inproceedings{zhang2018learning,
title={Learning a single convolutional super-resolution network for multiple degradations... | srmd_degradation | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan_light.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan_light.py | MIT |
def dpsr_degradation(x, k, sf=3):
''' bicubic downsampling + blur
Args:
x: HxWxC image, [0, 1]
k: hxw, double
sf: down-scale factor
Return:
downsampled LR image
Reference:
@inproceedings{zhang2019deep,
title={Deep Plug-and-Play Super-Resolution for Arbit... | bicubic downsampling + blur
Args:
x: HxWxC image, [0, 1]
k: hxw, double
sf: down-scale factor
Return:
downsampled LR image
Reference:
@inproceedings{zhang2019deep,
title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels},
author={Zha... | dpsr_degradation | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan_light.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan_light.py | MIT |
def classical_degradation(x, k, sf=3):
''' blur + downsampling
Args:
x: HxWxC image, [0, 1]/[0, 255]
k: hxw, double
sf: down-scale factor
Return:
downsampled LR image
'''
x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
# x = filters.correlate(x, np... | blur + downsampling
Args:
x: HxWxC image, [0, 1]/[0, 255]
k: hxw, double
sf: down-scale factor
Return:
downsampled LR image
| classical_degradation | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan_light.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan_light.py | MIT |
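`classical_degradation` is blur followed by plain subsampling. A sketch under two stated assumptions: a naive wrap-mode box blur stands in for convolution with the sampled kernel `k`, and the subsampling offset is 0 as in the row's `st = 0`:

```python
import numpy as np

def box_blur(x, k=3):
    """Naive box blur with wrap-around borders (stand-in for ndimage.convolve)."""
    out = np.zeros_like(x, dtype=float)
    r = k // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(x, dy, axis=0), dx, axis=1)
    return out / (k * k)

def classical_degradation(x, sf=3, k=3):
    """Blur, then keep every sf-th pixel: the classical SR degradation model."""
    x = box_blur(x, k)
    st = 0                        # subsampling offset, as in the original row
    return x[st::sf, st::sf, ...]
```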
def add_sharpening(img, weight=0.5, radius=50, threshold=10):
"""USM sharpening. borrowed from real-ESRGAN
Input image: I; Blurry image: B.
1. K = I + weight * (I - B)
2. Mask = 1 if abs(I - B) > threshold, else: 0
3. Blur mask:
4. Out = Mask * K + (1 - Mask) * I
Args:
img (Numpy arr... | USM sharpening. borrowed from real-ESRGAN
Input image: I; Blurry image: B.
1. K = I + weight * (I - B)
2. Mask = 1 if abs(I - B) > threshold, else: 0
3. Blur mask:
4. Out = Mask * K + (1 - Mask) * I
Args:
img (Numpy array): Input image, HWC, BGR; float32, [0, 1].
weight (float): ... | add_sharpening | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan_light.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan_light.py | MIT |
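The `add_sharpening` docstring spells out the four USM steps. A minimal NumPy sketch of those steps, with a wrap-mode box blur standing in for the original's Gaussian blur (an assumption, not the row's exact filter):

```python
import numpy as np

def blur(x, k=5):
    """Box blur with wrap borders (stand-in for the original's Gaussian blur)."""
    out = np.zeros_like(x, dtype=float)
    r = k // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(x, dy, axis=0), dx, axis=1)
    return out / (k * k)

def usm_sharpen(img, weight=0.5, radius=5, threshold=10):
    """USM sharpening following the four steps listed in the docstring."""
    b = blur(img, radius)                                      # B: blurry image
    residual = img - b
    k = img + weight * residual                                # 1. K = I + weight * (I - B)
    mask = (np.abs(residual) * 255 > threshold).astype(float)  # 2. hard mask
    soft = blur(mask, radius)                                  # 3. blur the mask
    out = soft * k + (1 - soft) * img                          # 4. blend
    return np.clip(out, 0, 1)
```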
def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None):
"""
This is the degradation model of BSRGAN from the paper
"Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
----------
img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizex... |
This is the degradation model of BSRGAN from the paper
"Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
----------
img: HXWXC, [0, 1], its size should be larger than (lq_patchsize x sf) x (lq_patchsize x sf)
sf: scale factor
isp_model: camera ISP model
Returns
--... | degradation_bsrgan | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan_light.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan_light.py | MIT |
def degradation_bsrgan_variant(image, sf=4, isp_model=None, up=False):
"""
This is the degradation model of BSRGAN from the paper
"Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
----------
sf: scale factor
isp_model: camera ISP model
Returns
-------
im... |
This is the degradation model of BSRGAN from the paper
"Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
----------
sf: scale factor
isp_model: camera ISP model
Returns
-------
img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]
hq:... | degradation_bsrgan_variant | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/bsrgan_light.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/bsrgan_light.py | MIT |
def imssave(imgs, img_path):
"""
imgs: list, N images of size WxHxC
"""
img_name, ext = os.path.splitext(os.path.basename(img_path))
for i, img in enumerate(imgs):
if img.ndim == 3:
img = img[:, :, [2, 1, 0]]
new_path = os.path.join(os.path.dirname(img_path), img_name+st... |
imgs: list, N images of size WxHxC
| imssave | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/utils_image.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/utils_image.py | MIT |
def split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=800, p_overlap=96, p_max=1000):
"""
split the large images from original_dataroot into small overlapped images with size (p_size)x(p_size),
and save them into taget_dataroot; only the images with larger size than (p_max)x(p_max)
... |
split the large images from original_dataroot into small overlapped images with size (p_size)x(p_size),
and save them into taget_dataroot; only the images with larger size than (p_max)x(p_max)
will be split.
Args:
original_dataroot:
taget_dataroot:
p_size: size of small image... | split_imageset | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/utils_image.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/utils_image.py | MIT |
def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)):
'''
Converts a torch Tensor into an image Numpy array of BGR channel order
Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order
Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default)
'''
tensor = tensor.squeeze(... |
Converts a torch Tensor into an image Numpy array of BGR channel order
Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order
Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default)
| tensor2img | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/utils_image.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/utils_image.py | MIT |
def augment_img(img, mode=0):
'''Kai Zhang (github: https://github.com/cszn)
'''
if mode == 0:
return img
elif mode == 1:
return np.flipud(np.rot90(img))
elif mode == 2:
return np.flipud(img)
elif mode == 3:
return np.rot90(img, k=3)
elif mode == 4:
re... | Kai Zhang (github: https://github.com/cszn)
| augment_img | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/utils_image.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/utils_image.py | MIT |
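The `augment_img` row shows the first few of eight flip/rotate modes (the dihedral group of the square). The truncated modes 4-7 below are filled in from the common KAIR-style implementation and should be treated as an assumption, not a quote of this row:

```python
import numpy as np

AUGMENTS = {
    0: lambda x: x,
    1: lambda x: np.flipud(np.rot90(x)),
    2: lambda x: np.flipud(x),
    3: lambda x: np.rot90(x, k=3),
    4: lambda x: np.flipud(np.rot90(x, k=2)),  # assumed: truncated in the row
    5: lambda x: np.rot90(x),                  # assumed
    6: lambda x: np.rot90(x, k=2),             # assumed
    7: lambda x: np.flipud(np.rot90(x, k=3)),  # assumed
}

def augment_img(img, mode=0):
    """Apply one of the 8 flip/rotate augmentations."""
    return AUGMENTS[mode](img)
```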
def augment_img_tensor4(img, mode=0):
'''Kai Zhang (github: https://github.com/cszn)
'''
if mode == 0:
return img
elif mode == 1:
return img.rot90(1, [2, 3]).flip([2])
elif mode == 2:
return img.flip([2])
elif mode == 3:
return img.rot90(3, [2, 3])
elif mode =... | Kai Zhang (github: https://github.com/cszn)
| augment_img_tensor4 | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/utils_image.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/utils_image.py | MIT |
def augment_img_tensor(img, mode=0):
'''Kai Zhang (github: https://github.com/cszn)
'''
img_size = img.size()
img_np = img.data.cpu().numpy()
if len(img_size) == 3:
img_np = np.transpose(img_np, (1, 2, 0))
elif len(img_size) == 4:
img_np = np.transpose(img_np, (2, 3, 1, 0))
i... | Kai Zhang (github: https://github.com/cszn)
| augment_img_tensor | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/utils_image.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/utils_image.py | MIT |
def rgb2ycbcr(img, only_y=True):
'''same as matlab rgb2ycbcr
only_y: only return Y channel
Input:
uint8, [0, 255]
float, [0, 1]
'''
in_img_type = img.dtype
img.astype(np.float32)
if in_img_type != np.uint8:
img *= 255.
# convert
if only_y:
rlt = np.dot... | same as matlab rgb2ycbcr
only_y: only return Y channel
Input:
uint8, [0, 255]
float, [0, 1]
| rgb2ycbcr | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/utils_image.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/utils_image.py | MIT |
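The truncated `rlt = np.dot...` line computes the MATLAB-style Y channel with the ITU-R BT.601 studio-swing coefficients. A sketch of the `only_y` path for float input in [0, 1]:

```python
import numpy as np

def rgb_to_y(img):
    """Y channel of MATLAB-style rgb2ycbcr for float RGB input in [0, 1]."""
    img = img * 255.0
    y = np.dot(img[..., :3], [65.481, 128.553, 24.966]) / 255.0 + 16.0
    return y / 255.0  # scale back to [0, 1]

# White maps to 235/255 and black to 16/255: the studio-swing Y range.
```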
def bgr2ycbcr(img, only_y=True):
'''bgr version of rgb2ycbcr
only_y: only return Y channel
Input:
uint8, [0, 255]
float, [0, 1]
'''
in_img_type = img.dtype
img.astype(np.float32)
if in_img_type != np.uint8:
img *= 255.
# convert
if only_y:
rlt = np.dot... | bgr version of rgb2ycbcr
only_y: only return Y channel
Input:
uint8, [0, 255]
float, [0, 1]
| bgr2ycbcr | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/utils_image.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/utils_image.py | MIT |
def calculate_ssim(img1, img2, border=0):
'''calculate SSIM
the same outputs as MATLAB's
img1, img2: [0, 255]
'''
#img1 = img1.squeeze()
#img2 = img2.squeeze()
if not img1.shape == img2.shape:
raise ValueError('Input images must have the same dimensions.')
h, w = img1.shape[:2]
... | calculate SSIM
the same outputs as MATLAB's
img1, img2: [0, 255]
| calculate_ssim | python | ali-vilab/AnyDoor | ldm/modules/image_degradation/utils_image.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/image_degradation/utils_image.py | MIT |
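MATLAB-style SSIM (as in the `calculate_ssim` row) slides an 11x11 Gaussian window over the image; the simplified single-window version below shows the formula itself. It will not reproduce the windowed score, only the structure of the computation:

```python
import numpy as np

def ssim_global(img1, img2, data_range=255.0):
    """SSIM computed from whole-image statistics (no Gaussian windowing)."""
    c1 = (0.01 * data_range) ** 2   # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    x, y = img1.astype(float), img2.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```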
def read_pfm(path):
"""Read pfm file.
Args:
path (str): path to file
Returns:
tuple: (data, scale)
"""
with open(path, "rb") as file:
color = None
width = None
height = None
scale = None
endian = None
header = file.readline().rstrip... | Read pfm file.
Args:
path (str): path to file
Returns:
tuple: (data, scale)
| read_pfm | python | ali-vilab/AnyDoor | ldm/modules/midas/utils.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/utils.py | MIT |
def write_pfm(path, image, scale=1):
"""Write pfm file.
Args:
path (str): path to file
image (array): data
scale (int, optional): Scale. Defaults to 1.
"""
with open(path, "wb") as file:
color = None
if image.dtype.name != "float32":
raise Exception(... | Write pfm file.
Args:
path (str): path to file
image (array): data
scale (int, optional): Scale. Defaults to 1.
| write_pfm | python | ali-vilab/AnyDoor | ldm/modules/midas/utils.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/utils.py | MIT |
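The `read_pfm`/`write_pfm` pair above implements the PFM format: a text header (`Pf` or `PF`, then width/height, then a scale whose sign encodes endianness) followed by raw float32 rows stored bottom-up. A minimal grayscale round-trip sketch (function names are mine, not the row's):

```python
import io
import numpy as np

def write_pfm_min(f, data, scale=-1.0):
    """Minimal grayscale PFM writer; scale < 0 marks little-endian data."""
    h, w = data.shape
    f.write(b"Pf\n")
    f.write(f"{w} {h}\n".encode())
    f.write(f"{scale}\n".encode())
    f.write(np.flipud(data).astype("<f4").tobytes())  # rows stored bottom-up

def read_pfm_min(f):
    assert f.readline().rstrip() == b"Pf"
    w, h = (int(v) for v in f.readline().decode().split())
    scale = float(f.readline().decode())
    dt = "<f4" if scale < 0 else ">f4"
    data = np.frombuffer(f.read(), dtype=dt).reshape(h, w)
    return np.flipud(data), abs(scale)

# Round trip through an in-memory buffer.
buf = io.BytesIO()
img = np.random.rand(4, 5).astype(np.float32)
write_pfm_min(buf, img)
buf.seek(0)
restored, s = read_pfm_min(buf)
```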
def read_image(path):
"""Read image and output RGB image (0-1).
Args:
path (str): path to file
Returns:
array: RGB image (0-1)
"""
img = cv2.imread(path)
if img.ndim == 2:
img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255... | Read image and output RGB image (0-1).
Args:
path (str): path to file
Returns:
array: RGB image (0-1)
| read_image | python | ali-vilab/AnyDoor | ldm/modules/midas/utils.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/utils.py | MIT |
def resize_image(img):
"""Resize image and make it fit for network.
Args:
img (array): image
Returns:
tensor: data ready for network
"""
height_orig = img.shape[0]
width_orig = img.shape[1]
if width_orig > height_orig:
scale = width_orig / 384
else:
sca... | Resize image and make it fit for network.
Args:
img (array): image
Returns:
tensor: data ready for network
| resize_image | python | ali-vilab/AnyDoor | ldm/modules/midas/utils.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/utils.py | MIT |
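`resize_image` scales the longest side to roughly 384 and rounds both dimensions up to a multiple of 32 so the tensor fits the network's stride. The size computation alone, as a stdlib sketch (the ceil rounding direction is an assumption based on the row's visible logic):

```python
import math

def network_input_size(width, height, base=384, multiple=32):
    """Scale the longest side to ~base; round both dims up to a multiple of 32."""
    scale = (width if width > height else height) / base
    w = int(math.ceil(width / scale / multiple) * multiple)
    h = int(math.ceil(height / scale / multiple) * multiple)
    return w, h
```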
def resize_depth(depth, width, height):
"""Resize depth map and bring to CPU (numpy).
Args:
depth (tensor): depth
width (int): image width
height (int): image height
Returns:
array: processed depth
"""
depth = torch.squeeze(depth[0, :, :, :]).to("cpu")
depth_re... | Resize depth map and bring to CPU (numpy).
Args:
depth (tensor): depth
width (int): image width
height (int): image height
Returns:
array: processed depth
| resize_depth | python | ali-vilab/AnyDoor | ldm/modules/midas/utils.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/utils.py | MIT |
def write_depth(path, depth, bits=1):
"""Write depth map to pfm and png file.
Args:
path (str): filepath without extension
depth (array): depth
"""
write_pfm(path + ".pfm", depth.astype(np.float32))
depth_min = depth.min()
depth_max = depth.max()
max_val = (2**(8*bits))-1
... | Write depth map to pfm and png file.
Args:
path (str): filepath without extension
depth (array): depth
| write_depth | python | ali-vilab/AnyDoor | ldm/modules/midas/utils.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/utils.py | MIT |
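`write_depth` exports a PNG alongside the PFM, which requires mapping the float depth range onto `bits` bytes per pixel. A sketch of that normalization (the constant-depth fallback to zeros mirrors the usual guard against division by zero):

```python
import numpy as np

def quantize_depth(depth, bits=1):
    """Linearly map depth to [0, 2**(8*bits) - 1] for PNG export."""
    max_val = (2 ** (8 * bits)) - 1
    dmin, dmax = depth.min(), depth.max()
    if dmax - dmin > np.finfo(float).eps:
        out = max_val * (depth - dmin) / (dmax - dmin)
    else:
        out = np.zeros_like(depth)   # degenerate range: all-zero image
    return out.astype(np.uint8 if bits == 1 else np.uint16)
```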
def load(self, path):
"""Load model from file.
Args:
path (str): file path
"""
parameters = torch.load(path, map_location=torch.device('cpu'))
if "optimizer" in parameters:
parameters = parameters["model"]
self.load_state_dict(parameters) | Load model from file.
Args:
path (str): file path
| load | python | ali-vilab/AnyDoor | ldm/modules/midas/midas/base_model.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/midas/base_model.py | MIT |
def __init__(self, scale_factor, mode, align_corners=False):
"""Init.
Args:
scale_factor (float): scaling
mode (str): interpolation mode
"""
super(Interpolate, self).__init__()
self.interp = nn.functional.interpolate
self.scale_factor = scale_fac... | Init.
Args:
scale_factor (float): scaling
mode (str): interpolation mode
| __init__ | python | ali-vilab/AnyDoor | ldm/modules/midas/midas/blocks.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/midas/blocks.py | MIT |
def forward(self, x):
"""Forward pass.
Args:
x (tensor): input
Returns:
tensor: interpolated data
"""
x = self.interp(
x, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners
)
return x | Forward pass.
Args:
x (tensor): input
Returns:
tensor: interpolated data
| forward | python | ali-vilab/AnyDoor | ldm/modules/midas/midas/blocks.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/midas/blocks.py | MIT |
def __init__(self, features):
"""Init.
Args:
features (int): number of features
"""
super().__init__()
self.conv1 = nn.Conv2d(
features, features, kernel_size=3, stride=1, padding=1, bias=True
)
self.conv2 = nn.Conv2d(
featur... | Init.
Args:
features (int): number of features
| __init__ | python | ali-vilab/AnyDoor | ldm/modules/midas/midas/blocks.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/midas/blocks.py | MIT |
def forward(self, x):
"""Forward pass.
Args:
x (tensor): input
Returns:
tensor: output
"""
out = self.relu(x)
out = self.conv1(out)
out = self.relu(out)
out = self.conv2(out)
return out + x | Forward pass.
Args:
x (tensor): input
Returns:
tensor: output
| forward | python | ali-vilab/AnyDoor | ldm/modules/midas/midas/blocks.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/midas/blocks.py | MIT |
def __init__(self, features):
"""Init.
Args:
features (int): number of features
"""
super(FeatureFusionBlock, self).__init__()
self.resConfUnit1 = ResidualConvUnit(features)
self.resConfUnit2 = ResidualConvUnit(features) | Init.
Args:
features (int): number of features
| __init__ | python | ali-vilab/AnyDoor | ldm/modules/midas/midas/blocks.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/midas/blocks.py | MIT |
def __init__(self, features, activation, bn):
"""Init.
Args:
features (int): number of features
"""
super().__init__()
self.bn = bn
self.groups=1
self.conv1 = nn.Conv2d(
features, features, kernel_size=3, stride=1, padding=1, bias=True,... | Init.
Args:
features (int): number of features
| __init__ | python | ali-vilab/AnyDoor | ldm/modules/midas/midas/blocks.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/midas/blocks.py | MIT |
def forward(self, x):
"""Forward pass.
Args:
x (tensor): input
Returns:
tensor: output
"""
out = self.activation(x)
out = self.conv1(out)
if self.bn==True:
out = self.bn1(out)
out = self.activation(out... | Forward pass.
Args:
x (tensor): input
Returns:
tensor: output
| forward | python | ali-vilab/AnyDoor | ldm/modules/midas/midas/blocks.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/midas/blocks.py | MIT |
def __init__(self, features, activation, deconv=False, bn=False, expand=False, align_corners=True):
"""Init.
Args:
features (int): number of features
"""
super(FeatureFusionBlock_custom, self).__init__()
self.deconv = deconv
self.align_corners = align_corner... | Init.
Args:
features (int): number of features
| __init__ | python | ali-vilab/AnyDoor | ldm/modules/midas/midas/blocks.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/midas/blocks.py | MIT |
def __init__(self, path=None, features=256, non_negative=True):
"""Init.
Args:
path (str, optional): Path to saved model. Defaults to None.
features (int, optional): Number of features. Defaults to 256.
backbone (str, optional): Backbone network for encoder. Defaults... | Init.
Args:
path (str, optional): Path to saved model. Defaults to None.
features (int, optional): Number of features. Defaults to 256.
backbone (str, optional): Backbone network for encoder. Defaults to resnet50
| __init__ | python | ali-vilab/AnyDoor | ldm/modules/midas/midas/midas_net.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/midas/midas_net.py | MIT |
def forward(self, x):
"""Forward pass.
Args:
x (tensor): input data (image)
Returns:
tensor: depth
"""
layer_1 = self.pretrained.layer1(x)
layer_2 = self.pretrained.layer2(layer_1)
layer_3 = self.pretrained.layer3(layer_2)
layer_... | Forward pass.
Args:
x (tensor): input data (image)
Returns:
tensor: depth
| forward | python | ali-vilab/AnyDoor | ldm/modules/midas/midas/midas_net.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/midas/midas_net.py | MIT |
def __init__(self, path=None, features=64, backbone="efficientnet_lite3", non_negative=True, exportable=True, channels_last=False, align_corners=True,
blocks={'expand': True}):
"""Init.
Args:
path (str, optional): Path to saved model. Defaults to None.
features (int, opt... | Init.
Args:
path (str, optional): Path to saved model. Defaults to None.
features (int, optional): Number of features. Defaults to 256.
backbone (str, optional): Backbone network for encoder. Defaults to resnet50
| __init__ | python | ali-vilab/AnyDoor | ldm/modules/midas/midas/midas_net_custom.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/midas/midas_net_custom.py | MIT |
def forward(self, x):
"""Forward pass.
Args:
x (tensor): input data (image)
Returns:
tensor: depth
"""
if self.channels_last==True:
print("self.channels_last = ", self.channels_last)
x.contiguous(memory_format=torch.channels_last)... | Forward pass.
Args:
x (tensor): input data (image)
Returns:
tensor: depth
| forward | python | ali-vilab/AnyDoor | ldm/modules/midas/midas/midas_net_custom.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/midas/midas_net_custom.py | MIT |
def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA):
"""Rezise the sample to ensure the given size. Keeps aspect ratio.
Args:
sample (dict): sample
size (tuple): image size
Returns:
tuple: new size
"""
shape = list(sample["disparity"].shape)
if ... | Resize the sample to ensure the given size. Keeps aspect ratio.
Args:
sample (dict): sample
size (tuple): image size
Returns:
tuple: new size
| apply_min_size | python | ali-vilab/AnyDoor | ldm/modules/midas/midas/transforms.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/midas/transforms.py | MIT |
def __init__(
self,
width,
height,
resize_target=True,
keep_aspect_ratio=False,
ensure_multiple_of=1,
resize_method="lower_bound",
image_interpolation_method=cv2.INTER_AREA,
):
"""Init.
Args:
width (int): desired output wid... | Init.
Args:
width (int): desired output width
height (int): desired output height
resize_target (bool, optional):
True: Resize the full sample (image, mask, target).
False: Resize image only.
Defaults to True.
keep_... | __init__ | python | ali-vilab/AnyDoor | ldm/modules/midas/midas/transforms.py | https://github.com/ali-vilab/AnyDoor/blob/master/ldm/modules/midas/midas/transforms.py | MIT |
def convert(
type: str,
project: str,
cloud: bool,
workspace: str,
logdir: str,
tb_logdir: str,
wb_project: str,
wb_entity: str,
wb_runid: str,
mlflow_uri: str,
mlflow_exp: str,
**kwargs,
):
"""Convert the log files of o... | Convert the log files of other experiment tracking tools to SwanLab. | convert | python | SwanHubX/SwanLab | swanlab/cli/commands/converter/__init__.py | https://github.com/SwanHubX/SwanLab/blob/master/swanlab/cli/commands/converter/__init__.py | Apache-2.0 |
def log(
data: Dict[str, DataType],
step: int = None,
print_to_console: bool = False,
):
"""
Log a row of data to the current run.
We recommend that you log data by SwanLabRun.log() method, but you can also use this function to log data.
Parameters
----------
data : Dict[str, DataTy... |
Log a row of data to the current run.
We recommend that you log data by SwanLabRun.log() method, but you can also use this function to log data.
Parameters
----------
data : Dict[str, DataType]
Data must be a dict.
The key must be a string with 0-9, a-z, A-Z, " ", "_", "-", "/".
... | log | python | SwanHubX/SwanLab | swanlab/data/sdk.py | https://github.com/SwanHubX/SwanLab/blob/master/swanlab/data/sdk.py | Apache-2.0 |
def finish(state: SwanLabRunState = SwanLabRunState.SUCCESS, error=None):
"""
Finish the current run and close the current experiment
Normally, swanlab will run this function automatically,
but you can also execute it manually and mark the experiment as 'completed'.
Once the experiment is marked as ... |
Finish the current run and close the current experiment
Normally, swanlab will run this function automatically,
but you can also execute it manually and mark the experiment as 'completed'.
Once the experiment is marked as 'completed', no more data can be logged to the experiment by 'swanlab.log'.
I... | finish | python | SwanHubX/SwanLab | swanlab/data/sdk.py | https://github.com/SwanHubX/SwanLab/blob/master/swanlab/data/sdk.py | Apache-2.0 |
def get_full_typename(o: Any) -> Any:
"""Determine types based on type names.
Avoids needing to import (and therefore depend on) PyTorch, TensorFlow, etc.
"""
instance_name = o.__class__.__module__ + "." + o.__class__.__name__
if instance_name in ["builtins.module", "__builtin__.module"]:
r... | Determine types based on type names.
Avoids needing to import (and therefore depend on) PyTorch, TensorFlow, etc.
| get_full_typename | python | SwanHubX/SwanLab | swanlab/data/modules/image/__init__.py | https://github.com/SwanHubX/SwanLab/blob/master/swanlab/data/modules/image/__init__.py | Apache-2.0 |
def __post_init__(self):
"""Validate input data after initialization.
Checks:
1. File exists
2. Path points to a file
3. File has .glb extension
Raises:
FileNotFoundError: If GLB file does not exist
ValueError: If path is not a file o... | Validate input data after initialization.
Checks:
1. File exists
2. Path points to a file
3. File has .glb extension
Raises:
FileNotFoundError: If GLB file does not exist
ValueError: If path is not a file or not a GLB file
| __post_init__ | python | SwanHubX/SwanLab | swanlab/data/modules/object3d/model3d.py | https://github.com/SwanHubX/SwanLab/blob/master/swanlab/data/modules/object3d/model3d.py | Apache-2.0 |
def parse(self) -> Tuple[str, MediaBuffer]:
"""Convert model to buffer for transmission.
This method reads the GLB file and prepares it for transmission. It:
1. Reads the file content into a buffer
2. Generates a unique hash from the content
3. Creates a unique filename using th... | Convert model to buffer for transmission.
This method reads the GLB file and prepares it for transmission. It:
1. Reads the file content into a buffer
2. Generates a unique hash from the content
3. Creates a unique filename using the original name, step, and hash
Returns:
... | parse | python | SwanHubX/SwanLab | swanlab/data/modules/object3d/model3d.py | https://github.com/SwanHubX/SwanLab/blob/master/swanlab/data/modules/object3d/model3d.py | Apache-2.0 |
def from_mol(cls, mol: Mol, *, caption: Optional[str] = None, **kwargs) -> "Molecule":
"""Creates a Molecule instance from an RDKit Mol object.
Args:
mol: The RDKit Mol object.
caption: Optional descriptive text.
Returns:
Molecule: A new Molecule instance.
... | Creates a Molecule instance from an RDKit Mol object.
Args:
mol: The RDKit Mol object.
caption: Optional descriptive text.
Returns:
Molecule: A new Molecule instance.
Raises:
ImportError: If RDKit is not available.
Examples:
... | from_mol | python | SwanHubX/SwanLab | swanlab/data/modules/object3d/molecule.py | https://github.com/SwanHubX/SwanLab/blob/master/swanlab/data/modules/object3d/molecule.py | Apache-2.0 |
def from_pdb_file(cls, pdb_file: Union[Path, str], *, caption: Optional[str] = None, **kwargs) -> "Molecule":
"""Creates a Molecule instance from a PDB file by reading the file content directly.
Args:
pdb_file: Path to the PDB file.
caption: Optional descriptive text.
R... | Creates a Molecule instance from a PDB file by reading the file content directly.
Args:
pdb_file: Path to the PDB file.
caption: Optional descriptive text.
Returns:
Molecule: A new Molecule instance.
Raises:
ValueError: If RDKit is not available... | from_pdb_file | python | SwanHubX/SwanLab | swanlab/data/modules/object3d/molecule.py | https://github.com/SwanHubX/SwanLab/blob/master/swanlab/data/modules/object3d/molecule.py | Apache-2.0 |
def from_sdf_file(cls, sdf_file: Path, *, caption: Optional[str] = None, **kwargs) -> "Molecule":
"""Creates a Molecule instance from an SDF file.
Args:
sdf_file: Path to the SDF file.
caption: Optional descriptive text.
Returns:
Molecule: A new Molecule ins... | Creates a Molecule instance from an SDF file.
Args:
sdf_file: Path to the SDF file.
caption: Optional descriptive text.
Returns:
Molecule: A new Molecule instance.
Raises:
ImportError: If RDKit is not available.
| from_sdf_file | python | SwanHubX/SwanLab | swanlab/data/modules/object3d/molecule.py | https://github.com/SwanHubX/SwanLab/blob/master/swanlab/data/modules/object3d/molecule.py | Apache-2.0 |
def from_smiles(cls, smiles: str, *, caption: Optional[str] = None, **kwargs) -> "Molecule":
"""Creates a Molecule instance from a SMILES string.
Args:
smiles: The SMILES string.
caption: Optional descriptive text.
Returns:
Molecule: A new Molecule instance.... | Creates a Molecule instance from a SMILES string.
Args:
smiles: The SMILES string.
caption: Optional descriptive text.
Returns:
Molecule: A new Molecule instance.
Raises:
ValueError: If RDKit is not available.
| from_smiles | python | SwanHubX/SwanLab | swanlab/data/modules/object3d/molecule.py | https://github.com/SwanHubX/SwanLab/blob/master/swanlab/data/modules/object3d/molecule.py | Apache-2.0 |