#!/usr/bin/env python
"""
Python implementation of common model fitting operations to
analyse protein folding data. Simply automates some fitting
and value calculation. Will be extended to include phi-value
analysis and other common calculations.
Allows for quick model evaluation and plotting.
Also tried to make this somewhat abstract and modular to
enable more interesting calculations, such as Ising models
and such.
Requirements (recommended python 2.7+):
- numpy
- scipy
- matplotlib
Lowe, A.R. 2015
"""
import sys
import inspect
import numpy as np
import scipy as sp
from . import core
from . import constants
__author__ = "Alan R. Lowe"
__email__ = "a.lowe@ucl.ac.uk"
def list_models():
""" List the kinetic of equilibrium models defined in this module.
Returns a list of the names of the models, whose parent class is
FitModel.
"""
clsmembers = inspect.getmembers(sys.modules[__name__], inspect.isclass)
verif = lambda cls: 'Verified: {0}'.format(cls[1]().verified)
fit_models = [ (cls[0], verif(cls)) for cls in clsmembers if cls[1].__bases__[0] == core.FitModel ]
return fit_models
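list_models discovers models by scanning this module with inspect.getmembers and keeping classes whose immediate base is core.FitModel. A self-contained sketch of that discovery pattern, using a stand-in base class since core is not importable on its own:

```python
import sys
import inspect

class FitModel(object):
    """Stand-in for core.FitModel."""
    verified = False

class TwoState(FitModel):
    verified = True

class ThreeState(FitModel):
    pass

def list_models():
    """Return names of classes in this module whose direct base is FitModel."""
    clsmembers = inspect.getmembers(sys.modules[__name__], inspect.isclass)
    return [name for name, cls in clsmembers if cls.__bases__[0] is FitModel]

models = list_models()
```

inspect.getmembers returns members sorted by name, so the resulting list is alphabetical.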
class TemplateModel(core.FitModel):
""" A template model for expansion
"""
def __init__(self):
core.FitModel.__init__(self)
fit_args = self.fit_func_args
self.params = tuple( [(fit_args[i],i) for i in range(len(fit_args))] )
self.default_params = np.array([])
def fit_func(self, x):
raise NotImplementedError
@property
def equation(self):
return r'F=f(x)'
# F = \frac{\exp( m(x-d_{50})) / RT} { 1+\exp(m(x-d_{50}))/RT}
"""
==========================================================
EQUILIBRIUM FOLDING models
==========================================================
"""
class TwoStateEquilibrium(core.FitModel):
""" Two state equilibrium denaturation curve - No sloping baseline.
Folding Scheme:
N <-> D
Params:
F = Fraction unfolded
m = m-value
x = denaturant concentration (M)
d50 = denaturant midpoint (M)
R = Universal Gas Constant (kcal.mol-1.K-1)
T = Temperature (Kelvin)
Reference:
Clarke and Fersht. Engineered disulfide bonds as probes of
the folding pathway of barnase: Increasing the stability
of proteins against the rate of denaturation.
Biochemistry (1993) vol. 32 (16) pp. 4322-4329
"""
def __init__(self):
core.FitModel.__init__(self)
fit_args = self.fit_func_args
self.params = tuple( [(fit_args[i],i) for i in range(len(fit_args))] )
self.default_params = np.array([1.5, 5.])
self.verified = True
def fit_func(self, x, m, d50):
F = ( np.exp((m*(x-d50))/core.temperature.RT)) / (1.+np.exp((m*(x-d50))/core.temperature.RT))
return F
@property
def equation(self):
return r'F = \frac{\exp( m(x-d_{50})) / RT} { 1+\exp(m(x-d_{50}))/RT}'
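A standalone numerical check of the two-state expression (RT ≈ 0.592 kcal/mol at 298 K is an assumed stand-in for core.temperature.RT): the exponent vanishes at x = d50, so the fraction unfolded is exactly 0.5 at the midpoint.

```python
import numpy as np

RT = 0.592  # kcal/mol at 298 K (assumed stand-in for core.temperature.RT)

def two_state_fraction_unfolded(x, m, d50):
    # F = exp(m(x - d50)/RT) / (1 + exp(m(x - d50)/RT))
    e = np.exp(m * (x - d50) / RT)
    return e / (1.0 + e)

x = np.linspace(0.0, 10.0, 101)
F = two_state_fraction_unfolded(x, m=1.5, d50=5.0)
```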
class TwoStateEquilibriumSloping(core.FitModel):
""" Two state equilibrium denaturation curve - Sloping baseline.
Folding Scheme:
N <-> D
Params:
F = Fraction unfolded
alpha_f = intercept of the native baseline at low denaturant concentrations
beta_f = slope/gradient of the native baseline at low denaturant concentrations
alpha_u = intercept of the denatured baseline at high denaturant concentrations
beta_u = slope/gradient of the denatured baseline at high denaturant concentrations
m = m-value
x = denaturant concentration (M)
d50 = denaturant midpoint (M)
R = Universal Gas Constant (kcal.mol-1.K-1)
T = Temperature (Kelvin)
Reference:
Clarke and Fersht. Engineered disulfide bonds as probes of
the folding pathway of barnase: Increasing the stability
of proteins against the rate of denaturation.
Biochemistry (1993) vol. 32 (16) pp. 4322-4329
"""
def __init__(self):
core.FitModel.__init__(self)
fit_args = self.fit_func_args
self.params = tuple( [(fit_args[i],i) for i in range(len(fit_args))] )
self.default_params = np.array([1., 0.1, 0.0, 0.1, 1.5, 5.])
self.verified = True
def fit_func(self, x, alpha_f, beta_f, alpha_u, beta_u, m, d50):
F = (alpha_f+beta_f*x) + (alpha_u+beta_u*x) * (\
( np.exp((m*(x-d50))/core.temperature.RT)) / (1.+np.exp((m*(x-d50))/core.temperature.RT)))
return F
@property
def equation(self):
return r'F = (\alpha_f+\beta_f x) + (\alpha_u+\beta_u x) \cdot \frac{\exp( m(x-d_{50})) / RT} { 1+\exp(m(x-d_{50}))/RT}'
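In practice these models are handed to a least-squares optimiser; a sketch of fitting the sloping-baseline form directly with scipy.optimize.curve_fit on synthetic noise-free data (RT again assumed to be 0.592 kcal/mol; the "true" parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

RT = 0.592  # assumed kcal/mol at 298 K

def sloping_two_state(x, alpha_f, beta_f, alpha_u, beta_u, m, d50):
    e = np.exp(m * (x - d50) / RT)
    return (alpha_f + beta_f * x) + (alpha_u + beta_u * x) * (e / (1.0 + e))

x = np.linspace(0.0, 9.0, 50)
true_params = (1.0, 0.05, -0.2, 0.02, 1.5, 4.5)
y = sloping_two_state(x, *true_params)

# start from the same initial guesses as default_params above
popt, pcov = curve_fit(sloping_two_state, x, y, p0=[1., 0.1, 0.0, 0.1, 1.5, 5.])
```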
# NOTE (ergm) added on 30/8/2017 and corrected incorrect ascii for running on PC 8/9/2017
class ThreeStateEquilibrium(core.FitModel):
""" Three state equilibrium denaturation curve.
Folding Scheme:
N <-> I <-> D
Params:
Y_obs = The spectroscopic signal maximum as a function of denaturant concentration
Y_N = spectroscopic signals of the native state
Y_D = spectroscopic signals of the denatured state
F_D = fraction denatured
F_N = fraction native
F_I = fraction intermediate
Kni = equilibrium constant of unfolding native to intermediate state
Kid = equilibrium constant of unfolding intermediate to denatured state
DGni = stability of native state relative to intermediate state
m_ni = m-value of native to intermediate transition
DGid = stability of intermediate state relative to denatured state
m_id = m-value of intermediate to denatured transition
x = denaturant concentration (M)
R = Universal Gas Constant (kcal.mol-1.K-1)
T = Temperature (Kelvin)
Reference:
Hecky J, Muller K.M. Structural Perturbation and Compensation by Directed
Evolution at Physiological Temperature Leads to Thermostabilization of
beta-Lactamase. (2005) Biochemistry 44. pp. 12640-12654
"""
def __init__(self):
core.FitModel.__init__(self)
fit_args = self.fit_func_args
self.params = tuple( [(fit_args[i],i) for i in range(len(fit_args))] )
self.default_params = np.array([1., 0.5, 0.0, 5., 1.5, 5., 1])
# NOTE (ergm) added on 3/11/2017
self.verified = True
def fit_func(self, x, Y_N, Y_I, Y_D, DGni, m_ni, DGid, m_id):
F = (Y_N + Y_I*np.exp((-DGni + m_ni*x)/core.temperature.RT) + Y_D*np.exp((-DGni + m_ni*x)/core.temperature.RT) * np.exp((-DGid + m_id*x)/core.temperature.RT)) \
/ (1 + np.exp((-DGni + m_ni*x)/core.temperature.RT) + np.exp((-DGni + m_ni*x)/core.temperature.RT) * np.exp((-DGid + m_id*x)/core.temperature.RT))
return F
@property
def equation(self):
return r'\begin{equation} \
\begin{aligned} \
& \Upsilon_{obs} = \Upsilon_N F_N + \Upsilon_I F_I + \Upsilon_D F_D \ \\ \
\text{where:} \\ \
& F_N = \frac{1} {1 + K_{NI} + K_{NI} K_{ID}}\\ \
& F_I = \frac{K_{NI}} {1 + K_{NI} + K_{NI} K_{ID}}\\ \
& F_D = \frac{K_{NI} K_{ID}} {1 + K_{NI} + K_{NI} K_{ID}}\\ \
\text{and:} \\ \
& K_{NI} = \exp \frac{-\Delta G_{NI}^{H_2O} + m_{NI} x} {RT}\\ \
& K_{ID} = \exp \frac{-\Delta G_{ID}^{H_2O} + m_{ID} x} {RT}\\ \
\\ \
\text{thus:} \\ \
& \Upsilon_{obs} = \frac{ \Upsilon_N + \Upsilon_I \exp \frac {-\Delta G_{NI}^{H_2O} + m_{NI} x} {RT} + \
\Upsilon_D \exp \frac{-\Delta G_{NI}^{H_2O} + m_{NI} x} {RT} \cdot \exp \frac{-\Delta G_{ID}^{H_2O} + m_{ID} x} {RT}} {1 + \exp \
\frac{-\Delta G_{NI}^{H_2O} + m_{NI} x} {RT} + \exp \frac{-\Delta G_{NI}^{H_2O} + m_{NI} x} {RT} \cdot \
\exp \frac{-\Delta G_{ID}^{H_2O} + m_{ID} x} {RT}}\
\end{aligned}\
\end{equation}'
# NOTE (ergm) added on 1/8/2017
class TwoStateDimerEquilibrium(core.FitModel):
""" Two State model for a dimer denaturation Equilibrium - No Intermediate.
Folding Scheme:
N2 <-> 2D
Params:
Y_obs = spectroscopic signal at a given concentration of urea
Y_N = spectroscopic signal for native monomeric subunits at a concentration of Pt
Y_D = spectroscopic signal for denatured monomeric subunits at a concentration of Pt
alpha_N = intercept of the native baseline at low denaturation concentrations
beta_N = slope/gradient of the native baseline at low denaturation concentrations
alpha_D = intercept of the denatured baseline at high denaturation concentrations
beta_D = slope/gradient of the denatured baseline at high denaturation concentrations
F_D = fraction of unfolded monomers
K_U = Equilibrium Constant for Unfolding of dimer.
Pt = total protein concentration. This variable needs to be set per denaturation curve.
m = m-value
x = denaturant concentration (M)
d50 = denaturant midpoint (M)
R = Universal Gas Constant (kcal.mol-1.K-1)
T = Temperature (Kelvin)
Reference:
Mallam and Jackson. Folding studies on a knotted protein.
Journal of Molecular Biology (2005) vol. 346 (5) pp. 1409-1421
"""
def __init__(self):
core.FitModel.__init__(self)
fit_args = self.fit_func_args
self.params = tuple( [(fit_args[i],i) for i in range(len(fit_args))] )
self.default_params = np.array([1., 0.1, 0.0, 0.1, 1.5, 5., 1e-6])
self.constants = (('Pt',1e-6),)
# NOTE (ergm) added on 3/11/2017
self.verified = True
# NOTE (ergm) added on 25/8/2017
def fit_func(self, x, alpha_N, beta_N, alpha_D, beta_D, m, d50, Pt):
K_U = np.exp(((core.temperature.RT * np.log(Pt))-m*(d50-x)) / core.temperature.RT)
F_D = (np.sqrt((np.square(K_U) + (8 * K_U * Pt))) - K_U) / (4*Pt)
Y_0 = ((alpha_N + beta_N*x)*(1-F_D)) + ((alpha_D + beta_D*x)*(F_D))
return Y_0
@property
def equation(self):
return r'\begin{equation} \
\begin{aligned} \
& \Upsilon_{obs} = \Upsilon_N \cdot (1-F_D) + \Upsilon_D \cdot F_D \\ \
\text{where} \\ \
& \Upsilon_N = \alpha_N+\beta_N x \\ \
& \Upsilon_D = \alpha_D+\beta_D x \\ \
& F_D = \frac{\sqrt{K_U^2 + 8 K_U Pt} - K_U} {4 Pt} \\ \
& K_U = \exp \frac{RT \ln(Pt) - m(d_{50} - x)} {RT}\
\end{aligned}\
\end{equation}'
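For N2 <-> 2D at total monomer concentration Pt, F_D is the positive root of the mass-balance quadratic 2·Pt·F_D² + K_U·F_D − K_U = 0, which is the expression used in fit_func. A standalone check that the root satisfies the quadratic and stays in (0, 1):

```python
import numpy as np

def dimer_fraction_denatured(K_U, Pt):
    # positive root of 2*Pt*F**2 + K_U*F - K_U = 0
    return (np.sqrt(np.square(K_U) + 8.0 * K_U * Pt) - K_U) / (4.0 * Pt)

Pt = 1e-6                                  # illustrative protein concentration (M)
K_U = 10.0 ** np.linspace(-9.0, -3.0, 7)   # illustrative equilibrium constants
F_D = dimer_fraction_denatured(K_U, Pt)
residual = 2.0 * Pt * np.square(F_D) + K_U * F_D - K_U
```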
# NOTE (ergm) added on 1/8/2017
# NOTE (ergm) updated Folding Scheme - was wrong 7/9/2017
class ThreeStateMonoIEquilibrium(core.FitModel):
""" Three State model for a dimer denaturation Equilibrium - Monomeric intermediate.
Folding Scheme:
N2 <-> 2I <-> 2D
Params:
Y_rel = spectroscopic signal at a given concentration of urea
Y_N = spectroscopic signal for native state
Y_D = spectroscopic signal for denatured state
Y_I = spectroscopic signal for intermediate state
F_D = fraction denatured monomers
F_N = fraction native dimers
F_I = fraction intermediate monomers
Pt = total protein concentration. This variable needs to be set per denaturation curve.
K1 = equilibrium constant of unfolding for native to intermediate state
K2 = equilibrium constant of unfolding for intermediate to denatured state
DG1 = stability of native state relative to intermediate state
m1 = m-value of native to intermediate transition
DG2 = stability of intermediate state relative to denatured state
m2 = m-value of intermediate to denatured transition
x = denaturant concentration (M)
R = Universal Gas Constant (kcal.mol-1.K-1)
T = Temperature (Kelvin)
Reference:
Mallam and Jackson. Folding studies on a knotted protein.
Journal of Molecular Biology (2005) vol. 346 (5) pp. 1409-1421
"""
def __init__(self):
core.FitModel.__init__(self)
fit_args = self.fit_func_args
self.params = tuple( [(fit_args[i],i) for i in range(len(fit_args))] )
self.default_params = np.array([1., 0.1, 1.0, 0.1, 1.5, 5., 3., 1e-6])
self.constants = (('Pt',1e-6),)
# NOTE (ergm) added on 3/11/2017
self.verified = True
def fit_func(self, x, DG1, m1, DG2, m2, Y_N, Y_I, Y_D, Pt):
K1 = np.exp((-DG1 + (m1*x)) / core.temperature.RT)
K2 = np.exp((-DG2 + (m2*x)) / core.temperature.RT)
# positive root of 2*Pt*F_I^2 + K1*(1+K2)*F_I - K1 = 0
F_I = (-K1*(1+K2) + np.sqrt(np.square(K1) * np.square(1+K2) + (8*Pt*K1))) / (4*Pt)
Y_rel = (Y_N * ((2 * Pt * np.square(F_I))/K1)) + (Y_I * F_I) + (Y_D * (K2*F_I))
return Y_rel
@property
def equation(self):
return r'\begin{equation} \
\begin{aligned} \
& \Upsilon_{rel} = \Upsilon_N F_N + \Upsilon_I F_I + \Upsilon_D F_D \\ \
\text{expanded:} \\ \
& \Upsilon_{rel} = \Upsilon_N \cdot \frac{2PtF_I^2} {K_1} + \Upsilon_I F_I + \Upsilon_D * K_2F_I \\ \
\\ \
\text{where:} \\ \
& F_I = \frac {- K_1 (1+K_2) + \sqrt{(K_1^2 (1+K_2)^2 + (8 Pt K_1))}} {4Pt} \\ \
& K_1 = \exp \frac{-\Delta G_{H_2O}^1 + m_1 x} {RT} \\ \
& K_2 = \exp \frac{-\Delta G_{H_2O}^2 + m_2 x} {RT}\
\end{aligned}\
\end{equation}'
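With F_N = 2·Pt·F_I²/K1 and F_D = K2·F_I, conservation of monomers (F_N + F_I + F_D = 1) makes F_I the positive root of 2·Pt·F_I² + K1(1+K2)·F_I − K1 = 0, matching the equation above. A standalone check with illustrative values:

```python
import numpy as np

def monomeric_intermediate_fraction(K1, K2, Pt):
    # positive root of 2*Pt*F**2 + K1*(1+K2)*F - K1 = 0
    b = K1 * (1.0 + K2)
    return (-b + np.sqrt(b * b + 8.0 * Pt * K1)) / (4.0 * Pt)

Pt, K1, K2 = 1e-6, 1e-7, 0.5               # illustrative values
F_I = monomeric_intermediate_fraction(K1, K2, Pt)
F_N = 2.0 * Pt * F_I**2 / K1
F_D = K2 * F_I
total = F_N + F_I + F_D
```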
# NOTE (ergm) added on 1/8/2017
# NOTE (ergm) updated Folding Scheme - was wrong 7/9/2017
class ThreeStateDimericIEquilibrium(core.FitModel):
""" Three State model for a dimer denaturation Equilibrium - Dimeric Intermediate.
Folding Scheme:
N2 <-> I2 <-> 2D
Params:
Y_rel = spectroscopic signal at a given concentration of urea
Y_N = spectroscopic signal for native state
Y_D = spectroscopic signal for denatured state
Y_I = spectroscopic signal for intermediate state
F_D = fraction denatured monomers
F_N = fraction native dimers
F_I = fraction intermediate dimers
Pt = total protein concentration. This variable needs to be set per denaturation curve.
K1 = equilibrium constant of unfolding native to intermediate state
K2 = equilibrium constant of unfolding intermediate to denatured state
DG1 = stability of native state relative to intermediate state
m1 = m-value of native to intermediate transition
DG2 = stability of intermediate state relative to denatured state
m2 = m-value of intermediate to denatured transition
x = denaturant concentration (M)
R = Universal Gas Constant (kcal.mol-1.K-1)
T = Temperature (Kelvin)
Reference:
Mallam and Jackson. Folding studies on a knotted protein.
Journal of Molecular Biology (2005) vol. 346 (5) pp. 1409-1421
"""
def __init__(self):
core.FitModel.__init__(self)
fit_args = self.fit_func_args
self.params = tuple( [(fit_args[i],i) for i in range(len(fit_args))] )
self.default_params = np.array([1., 0.1, 0.0, 0.1, 1.5, 5., 2., 1e-6])
self.constants = (('Pt',1e-6),)
# NOTE (ergm) added on 3/11/2017
self.verified = True
def fit_func(self, x, DG1, m1, DG2, m2, Y_N, Y_I, Y_D, Pt):
K1 = np.exp((-DG1 + (m1*x)) / core.temperature.RT)
K2 = np.exp((-DG2 + (m2*x)) / core.temperature.RT)
F_D = (-(K1*K2) + np.sqrt(np.square(K1*K2) + 8*(1+K1)*(K1*K2)*Pt)) / (4*Pt*(1+K1))
Y_rel = (Y_N * ((2 * Pt * np.square(F_D))/(K1*K2))) + (Y_I * ((2 * Pt * np.square(F_D))/K2)) + (Y_D * F_D)
return Y_rel
@property
def equation(self):
return r'\begin{equation} \
\begin{aligned} \
& \Upsilon_{rel} = \Upsilon_N F_N + \Upsilon_I F_I + \Upsilon_D F_D \\ \
\text{expanded:} \\ \
& \Upsilon_{rel} = \Upsilon_N \cdot \frac{2PtF_D^2} {K_1 K_2} + \Upsilon_I \frac{2PtF_D^2} {K_2} + \Upsilon_D * (F_D) \\ \
\\ \
\text{where:} \\ \
& F_D = \frac {- K_1 K_2 + \sqrt{((K_1 K_2)^2 + 8(1+K_1)(K_1 K_2)Pt)}} {4Pt (1 + K_1)} \\ \
& K_1 = \exp \frac{-\Delta G_{H_2O}^1 + m_1 x} {RT} \\ \
& K_2 = \exp \frac{-\Delta G_{H_2O}^2 + m_2 x} {RT}\
\end{aligned}\
\end{equation}'
class HomozipperIsingEquilibrium(core.FitModel):
""" Homopolymer Zipper Ising model
Params:
q = partition function
f = fraction of folded protein
Kappa = equilibrium constant of folding for a given repeating unit
Tau = equilibrium constant of association between 2 repeating units
n = number of repeating units
x = denaturant concentration (M)
Gi = intrinsic stability (folding energy) of a repeating unit i
mi = denaturant sensitivity of the intrinsic stability of a repeating unit i
Gi,i+1 = interface interaction energy between 2 repeating units
R = Universal Gas Constant (kcal.mol-1.K-1)
T = Temperature (Kelvin)
Reference:
Aksel and Barrick. Analysis of repeat-protein folding using
nearest-neighbor statistical mechanical models.
Methods in enzymology (2009) vol. 455 pp. 95-125
"""
def __init__(self):
core.FitModel.__init__(self)
fit_args = self.fit_func_args
self.params = tuple( [(fit_args[i],i) for i in range(len(fit_args))] )
self.default_params = np.array([7, 0.1, -.53, -4.6])
self.constants = (('n',7),)
self.verified = True
def fit_func(self, x, n, DG_intrinsic, m_intrinsic, DG_interface):
# # clamp to prevent instability
# if DG_intrinsic<0. or DG_interface>0.:
# return core.FIT_ERROR(x)
k = np.exp(-(DG_intrinsic - m_intrinsic*x) / core.temperature.RT )
#t = np.exp(-(DG_interface - m_interface*x) / core.temperature.RT )
t = np.exp(-(DG_interface) / core.temperature.RT )
pre_factor = (k/(n*(k*t-1)))
numerator = n*(k*t)**(n+2) - (n+2)*(k*t)**(n+1) + (n+2)*k*t-n
denominator = (k*t-1)**2 + k*((k*t)**(n+1) - (n+1)*k*t+n )
theta = pre_factor * (numerator / denominator)
return 1.-theta
# NOTE (ergm) changed on 4/9/2017
@property
def equation(self):
return r'\text{the partition function } (q) \text{ and thus fraction of folded protein } (f) \text{ of n arrayed repeats are given by:}\\ \
\begin{equation} \\ \
\begin{aligned} \
& q = 1 + \frac{\kappa([\kappa \tau]^{n+1} - [n+1]\kappa \tau + n)} {(\kappa \tau - 1)^2} \\ \
\\ \
& f = \frac{1} {n} \sum^{n}_{i=0}i\frac{(n-i+1)\kappa^i\tau^{i-1}} {q} \\ \
\\ \
\text{where:} \\ \
& \kappa (x) = \exp\frac{-G_i} {RT} = \exp\frac{-G_{i,H_2O} + m_i x} {RT} \\ \
\\ \
& \tau (x) = \exp\frac{-G_{i,i+1}} {RT} \
\end{aligned}\
\end{equation}'
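The closed form used in fit_func can be cross-checked against the explicit sums q = 1 + Σ_{i=1..n} (n−i+1)κ^i τ^(i−1) and f = (1/n) Σ_{i=1..n} i(n−i+1)κ^i τ^(i−1) / q. A standalone comparison (RT assumed to be 0.592 kcal/mol; parameter values mirror the defaults in __init__):

```python
import numpy as np

RT = 0.592  # assumed kcal/mol at 298 K

def theta_closed_form(x, n, DG_intrinsic, m_intrinsic, DG_interface):
    k = np.exp(-(DG_intrinsic - m_intrinsic * x) / RT)
    t = np.exp(-DG_interface / RT)
    a = k * t
    pre_factor = k / (n * (a - 1.0))
    numerator = n * a**(n + 2) - (n + 2) * a**(n + 1) + (n + 2) * a - n
    denominator = (a - 1.0)**2 + k * (a**(n + 1) - (n + 1) * a + n)
    return pre_factor * numerator / denominator

def theta_direct(x, n, DG_intrinsic, m_intrinsic, DG_interface):
    k = np.exp(-(DG_intrinsic - m_intrinsic * x) / RT)
    t = np.exp(-DG_interface / RT)
    q = 1.0 + sum((n - i + 1) * k**i * t**(i - 1) for i in range(1, n + 1))
    s = sum(i * (n - i + 1) * k**i * t**(i - 1) for i in range(1, n + 1))
    return s / (n * q)

x = np.linspace(0.5, 8.0, 16)
th_closed = theta_closed_form(x, 7, 0.1, -0.53, -4.6)
th_direct = theta_direct(x, 7, 0.1, -0.53, -4.6)
```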
class HeteropolymerIsingEquilibrium(core.FitModel):
""" Heteropolymer Ising model
Params:
q = partition function
f = fraction of folded protein
Kappa = equilibrium constant of folding for a given repeating unit
Tau = equilibrium constant of association between 2 repeating units
n = number of repeating units
x = denaturant concentration (M)
DG_intrinsic = intrinsic stability (folding energy) of a repeating unit i
m_intrinsic = denaturant sensitivity of the intrinsic stability of a repeating unit i
DG_interface = interface interaction energy between 2 repeating units
R = Universal Gas Constant (kcal.mol-1.K-1)
T = Temperature (Kelvin)
Reference:
Aksel and Barrick. Analysis of repeat-protein folding using
nearest-neighbor statistical mechanical models.
Methods in enzymology (2009) vol. 455 pp. 95-125
"""
def __init__(self):
core.FitModel.__init__(self)
def fit_func(self, x):
raise NotImplementedError('This is a dummy model.')
# NOTE (ergm) changed on 4/9/2017
@property
def equation(self):
return r'\text{the partition function } (q) \text{ and thus fraction of folded protein } (f) \text{ of n arrayed repeats are given by:} \\ \
\begin{equation} \\ \
\begin{aligned} \\ \
\kappa(x) &= \exp(-(\Delta G_{intrinsic} - m_{intrinsic}x) / RT) \\ \
\tau(x) &= \exp(-\Delta G_{interface} / RT) \\ \
q(i) &= \
\begin{bmatrix} 0 & 1\end{bmatrix} \
\begin{bmatrix} \kappa_1\tau_{-1} & 1\\ \kappa & 1 \end{bmatrix} \
\ldots \
\begin{bmatrix} \kappa_n\tau_{n-1} & 1\\ \kappa & 1 \end{bmatrix} \
\begin{bmatrix} 1 \\ 1 \end{bmatrix} \\ \
\theta &= \frac{1}{nq(n)} \sum_{i=0}^{n}{q(i)} \
\end{aligned} \
\end{equation}'
"""
==========================================================
KINETIC FOLDING models
==========================================================
"""
class TwoStateChevron(core.FitModel):
""" Two state chevron plot.
Folding Scheme:
N <-> D
Params:
k_obs = rate constant of unfolding or refolding at a particular denaturant concentration
kf = rate constant of refolding at a particular denaturant concentration
mf = the gradient of the refolding arm of the chevron
ku = rate constant of unfolding at a particular denaturant concentration
mu = the gradient of the unfolding arm of the chevron
x = denaturant concentration (M)
Reference:
Jackson SE and Fersht AR. Folding of chymotrypsin inhibitor 2.
1. Evidence for a two-state transition.
Biochemistry (1991) 30(43):10428-10435.
"""
def __init__(self):
core.FitModel.__init__(self)
fit_args = self.fit_func_args
self.params = tuple( [(fit_args[i],i) for i in range(len(fit_args))] )
self.default_params = np.array([100., 1.3480, 5e-4, 1.])
#self.constants = (('mf',1.76408),('mu',1.13725))
self.verified = True
def fit_func(self, x, kf, mf, ku, mu):
k_obs = kf*np.exp(-mf*x) + ku*np.exp(mu*x)
return k_obs
def error_func(self, y):
return np.log(y)
# NOTE (ergm) added on 24/8/2017
# def components(self, x, kf, mf, ku, mu):
# k_f = kf*np.exp(-mf*x)
# k_u = ku*np.exp(mu*x)
# k_obs = k_f + k_u
# return {'k_f':k_f, 'k_u':k_u}
@property
def equation(self):
return r'\begin{equation} \
\begin{aligned} \
& k_{obs} = k_f + k_u \\ \
\\ \
\text{where:} \\ \
& k_f = k_f^{H_2O}\exp(-m_{kf}x)\\ \
& k_u = k_u^{H_2O}\exp(m_{ku}x) \\ \
\text{thus:} \\ \
& k_{obs} = k_f^{H_2O}\exp(-m_{kf}x) + k_u^{H_2O}\exp(m_{ku}x)\\ \
\end{aligned} \
\end{equation}'
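Setting the derivative of k_obs = kf·exp(−mf·x) + ku·exp(mu·x) to zero puts the chevron minimum where the two arms balance, at x* = ln(mf·kf/(mu·ku))/(mf + mu). A standalone sketch using the default parameters from __init__:

```python
import numpy as np

def chevron(x, kf, mf, ku, mu):
    return kf * np.exp(-mf * x) + ku * np.exp(mu * x)

kf, mf, ku, mu = 100., 1.3480, 5e-4, 1.   # default parameters from __init__
x = np.linspace(0.0, 10.0, 2001)
k_obs = chevron(x, kf, mf, ku, mu)

# analytic position of the chevron minimum: mf*kf*exp(-mf*x) = mu*ku*exp(mu*x)
x_min = np.log(mf * kf / (mu * ku)) / (mf + mu)
```

Note that error_func returns log(y): chevrons are fitted in log space, which weights the refolding and unfolding arms evenly.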
class ThreeStateChevron(core.FitModel):
""" Three state chevron with single intermediate.
Folding Scheme:
N <-> I <-> D
Params:
k_obs = rate constant of unfolding or refolding at a particular denaturant concentration
kfi = microscopic rate constant for the conversion of folded to intermediate
kif = microscopic rate constant for the conversion of intermediate to folded
i.e. k_if = kif(H2O) * exp((mi - mif)*x)
Kiu = equilibrium constant for the rapid equilibration between intermediate & unfolded
i.e. Kiu = Kiu(H2O) * exp((mu-mi)*x)
mif = m-value associated with the kinetic transition between intermediate & folded
mi = m-value associated with the equilibrium transition between intermediate & folded
mu = m-value associated with the equilibrium transition between unfolded & folded
x = denaturant concentration (M)
Reference:
Parker et al. An integrated kinetic analysis of
intermediates and transition states in protein folding reactions.
Journal of molecular biology (1995) vol. 253 (5) pp. 771-86
"""
def __init__(self):
core.FitModel.__init__(self)
fit_args = self.fit_func_args
self.params = tuple( [(fit_args[i],i) for i in range(len(fit_args))] )
self.default_params = np.array([4.5e-4, -9.5e-1, 1.3e9, -6.9, 1.4e-8, -1.6])
#self.constants = (('mif',-0.97996),('mi',-6.00355),('mu',-1.66154))
self.verified = True
def fit_func(self, x, kfi, mif, kif, mi, Kiu, mu):
k_fi = kfi*np.exp(-mif*x)
k_if = kif*np.exp((mi - mif)*x)
K_iu = Kiu*np.exp((mu - mi)*x)
k_obs = k_fi + k_if / (1.+1./K_iu)
return k_obs
def error_func(self, y):
return np.log(y)
def components(self, x, kfi, mif, kif, mi, Kiu, mu):
k_fi = kfi*np.exp(-mif*x)
k_if = kif*np.exp((mi - mif)*x)
k_obs_I = k_fi + k_if
return {'kobs_I':k_obs_I}
@property
def equation(self):
return r'\begin{equation} \
\begin{aligned} \
& k_{obs} = k_{fi} + \frac{k_{if}} {1+1/K_{iu}} \\ \
\\ \
\text{where:} \\ \
& k_{fi} = k_{fi}^{H_2O}\exp(-m_{if}x)\\ \
& k_{if} = k_{if}^{H_2O}\exp((m_i - m_{if})x)\\ \
& K_{iu} = K_{iu}^{H_2O}\exp((m_u - m_i)x)\\ \
\text{thus:} \\ \
& k_{obs} = k_{fi}^{H_2O}\exp(-m_{if}x) + k_{if}^{H_2O}\exp((m_i - m_{if})x) /(1 + 1 / (K_{iu}^{H_2O}\exp((m_u-m_i)x)))\\ \
\end{aligned} \
\end{equation}'
class ThreeStateFastPhaseChevron(core.FitModel):
""" Three state chevron with single intermediate.
Folding Scheme: N <-> I <-> D
Params:
k_obs = rate constant of unfolding or refolding at a particular denaturant concentration
kfi = microscopic rate constant for the conversion of folded to intermediate
kif = microscopic rate constant for the conversion of intermediate to folded
kiu = microscopic rate constant for the conversion of intermediate to unfolded
kui = microscopic rate constant for the conversion of unfolded to intermediate
Kiu = equilibrium constant for the rapid equilibration between intermediate & unfolded
mfi = m-value associated with the kinetic transition between folded & intermediate
mif = m-value associated with the kinetic transition between intermediate & folded
miu = m-value associated with the kinetic transition between intermediate & unfolded
mui = m-value associated with the kinetic transition between unfolded & intermediate
x = denaturant concentration (M)
Reference:
Parker et al. An integrated kinetic analysis of
intermediates and transition states in protein folding reactions.
Journal of molecular biology (1995) vol. 253 (5) pp. 771-86
"""
def __init__(self):
core.FitModel.__init__(self)
fit_args = self.fit_func_args
self.params = tuple( [(fit_args[i],i) for i in range(len(fit_args))] )
self.default_params = np.array([172., 1.42, .445, .641, 1e4, 2.71313, 1.83e-3, 1.06])
#self.constants = (('kui',172.), ('mui',1.42), ('kiu',.445), ('miu',.641), ('mif',-2.71313),('mfi',1.06534))
self.verified = True
def fit_func(self, x, kui, mui, kiu, miu, kif, mif, kfi, mfi):
k_iu = kiu*np.exp(miu*x)
k_ui = kui*np.exp(-mui*x)
k_if = kif*np.exp(-mif*x)
k_fi = kfi*np.exp(mfi*x)
K_iu = k_iu / (k_iu+k_ui)
k_obs = k_fi + k_if / (1.+1./K_iu)
return k_obs
def error_func(self, y):
return np.log(y)
def components(self, x, kui, mui, kiu, miu, kif, mif, kfi, mfi):
k_iu = kiu*np.exp(miu*x)
k_ui = kui*np.exp(-mui*x)
k_if = kif*np.exp(-mif*x)
k_fi = kfi*np.exp(mfi*x)
k_obs_I = k_iu + k_ui
k_obs_N = k_fi + k_if
return {'kobs_I':k_obs_I} #, 'kobs_N':k_obs_N}
# NOTE (ergm) added on 23/8/2017
@property
def equation(self):
return r'\begin{equation} \
\begin{aligned} \
& k_{obs} = k_{fi} + \frac{k_{if}} {1+1/K_{iu}} \\ \
\\ \
\text{where:} \\ \
& k_{fi} = k_{fi}^{H_2O}\exp(m_{fi}x)\\ \
& k_{if} = k_{if}^{H_2O}\exp(-m_{if}x)\\ \
& k_{iu} = k_{iu}^{H_2O}\exp(m_{iu}x)\\ \
& k_{ui} = k_{ui}^{H_2O}\exp(-m_{ui}x)\\ \
& K_{iu} = \frac{k_{iu}} {k_{iu} + k_{ui}}\\ \
\end{aligned} \
\end{equation}'
class ThreeStateSequentialChevron(core.FitModel):
""" Three state metastable intermediate chevron plot.
Folding Scheme: N <-> I <-> D
Params:
k_obs = rate constant of unfolding or refolding at a particular denaturant concentration
kfi = microscopic rate constant for the conversion of folded to intermediate
kif = microscopic rate constant for the conversion of intermediate to folded
kiu = microscopic rate constant for the conversion of intermediate to unfolded
kui = microscopic rate constant for the conversion of unfolded to intermediate
mfi = m-value associated with the kinetic transition between folded & intermediate
mif = m-value associated with the kinetic transition between intermediate & folded
miu = m-value associated with the kinetic transition between intermediate & unfolded
mui = m-value associated with the kinetic transition between unfolded & intermediate
x = denaturant concentration (M)
Reference:
Bachmann and Kiefhaber. Apparent two-state tendamistat
folding is a sequential process along a defined route.
J Mol Biol (2001) vol. 306 (2) pp. 375-386
"""
def __init__(self):
core.FitModel.__init__(self)
fit_args = self.fit_func_args
self.params = tuple( [(fit_args[i],i) for i in range(len(fit_args))] )
self.default_params = np.array([2e4, 0.3480, 1e4, 0, 20.163, 1.327, 0.3033, 0.2431])
# NOTE (ergm) changed constants on 3/10/2017
self.constants = (('kiu', 1.e4),('miu',0.))
self.verified = True
def fit_func(self, x, kui, mui, kiu, miu, kif, mif, kfi, mfi):
k_ui = kui*np.exp(-mui*x)
k_iu = kiu*np.exp(miu*x)
k_if = kif*np.exp(-mif*x)
k_fi = kfi*np.exp(mfi*x)
lam_1 = -(k_ui + k_iu + k_if + k_fi)
lam_2 = k_ui * (k_if+k_fi) + k_iu*k_fi
k_obs = 0.5 * (-lam_1 - np.sqrt(lam_1**2 - 4*lam_2))
return k_obs
def error_func(self, y):
return np.log(y)
def components(self, x, kui, mui, kiu, miu, kif, mif, kfi, mfi):
k_ui = kui*np.exp(-mui*x)
k_iu = kiu*np.exp(miu*x)
k_if = kif*np.exp(-mif*x)
k_fi = kfi*np.exp(mfi*x)
k_TS1 = k_ui + (k_fi/k_if)*k_iu
k_TS2 = (k_ui/k_iu)*k_if + k_fi
return {'kTS1':k_TS1, 'kTS2':k_TS2}

    @property
    def equation(self):
        return r'\begin{equation} \
                \begin{aligned} \
                & k_{obs} = 0.5(-A_1 \pm \sqrt{A_1^2 - 4A_2}) \\ \
                \\ \
                \text{where:} \\ \
                & A_1 = -(k_{ui} + k_{iu} + k_{if} + k_{fi}) \\ \
                & A_2 = k_{ui}(k_{if} + k_{fi}) + k_{iu}k_{fi} \\ \
                \text{and:} \\ \
                & k_{fi} = k_{fi}^{H_2O}\exp(m_{fi}x) \\ \
                & k_{if} = k_{if}^{H_2O}\exp(-m_{if}x) \\ \
                & k_{iu} = k_{iu}^{H_2O}\exp(m_{iu}x) \\ \
                & k_{ui} = k_{ui}^{H_2O}\exp(-m_{ui}x) \\ \
                \end{aligned} \
                \end{equation}'
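The three-state expression coded in `fit_func` reduces to the slow root of a quadratic in the apparent rate. As an illustration (this helper is not part of the original module; its defaults mirror the class's `default_params`), a stand-alone scalar sketch using only the standard library:

```python
import math

def three_state_kobs(x, kui=2e4, mui=0.3480, kiu=1e4, miu=0.0,
                     kif=20.163, mif=1.327, kfi=0.3033, mfi=0.2431):
    """Slow relaxation rate of the N <-> I <-> D scheme at denaturant conc. x."""
    k_ui = kui * math.exp(-mui * x)
    k_iu = kiu * math.exp(miu * x)
    k_if = kif * math.exp(-mif * x)
    k_fi = kfi * math.exp(mfi * x)
    lam_1 = -(k_ui + k_iu + k_if + k_fi)
    lam_2 = k_ui * (k_if + k_fi) + k_iu * k_fi
    # k_obs is the smaller root of lambda**2 + lam_1*lambda + lam_2 = 0
    return 0.5 * (-lam_1 - math.sqrt(lam_1 ** 2 - 4.0 * lam_2))
```

With the default parameters this gives a refolding-limb rate of roughly 14 s⁻¹ at zero denaturant.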


class ParallelTwoStateChevron(core.FitModel):
    """ Parallel Two state chevron plot.

    Folding Scheme:
        N <-> D
        ^     ^
        |_____|

    Params:
    k obs = rate constant of unfolding or refolding at a particular denaturant concentration
    k_obs_A = rate constant of unfolding or refolding of pathway A at a particular denaturant concentration
    k_obs_B = rate constant of unfolding or refolding of pathway B at a particular denaturant concentration
    mf_A = the gradient of refolding arm of pathway A
    mf_B = the gradient of refolding arm of pathway B
    mu_A = the gradient of unfolding arm of pathway A
    mu_B = the gradient of unfolding arm of pathway B
    x = denaturant concentration (M)

    Reference:
    Lowe & Itzhaki. Rational redesign of the folding pathway of a modular protein.
    PNAS (2007) vol. 104 (8) pp. 2679-2684
    """

    def __init__(self):
        core.FitModel.__init__(self)
        fit_args = self.fit_func_args
        self.params = tuple([(fit_args[i], i) for i in range(len(fit_args))])
        self.default_params = np.array([50., 1.3480, 5e-4, 1., 150., 3.5])

    def fit_func(self, x, kf_A, mf_A, ku_A, mu_A, kf_B, mf_B):
        if mf_A < 0. or mf_B < 0. or mu_A < 0.:
            return core.FIT_ERROR(x)
        if kf_A < 0. or ku_A < 0. or kf_B < 0.:
            return core.FIT_ERROR(x)
        deltaG_A = kf_A / ku_A
        ku_B = kf_B / deltaG_A
        mu_B = np.abs(mf_A + mu_A) - np.abs(mf_B)
        k_obs_A = kf_A*np.exp(-mf_A*x) + ku_A*np.exp(mu_A*x)
        k_obs_B = kf_B*np.exp(-mf_B*x) + ku_B*np.exp(mu_B*x)
        k_obs = k_obs_A + k_obs_B
        return k_obs

    def error_func(self, y):
        return np.log(y)

    def components(self, x, kf_A, mf_A, ku_A, mu_A, kf_B, mf_B):
        deltaG_A = kf_A / ku_A
        ku_B = kf_B / deltaG_A
        mu_B = np.abs(mf_A + mu_A) - np.abs(mf_B)
        k_obs_A = kf_A*np.exp(-mf_A*x) + ku_A*np.exp(mu_A*x)
        k_obs_B = kf_B*np.exp(-mf_B*x) + ku_B*np.exp(mu_B*x)
        return {'kobs_A': k_obs_A, 'kobs_B': k_obs_B}

    # NOTE (ergm) added on 23/8/2017
    @property
    def equation(self):
        return r'\begin{equation} \
                \begin{aligned} \
                & k_{obs} = k_{obs}^A + k_{obs}^B \\ \
                \\ \
                \text{where:} \\ \
                & \Delta G^A = k_f^A / k_u^A \\ \
                & k_u^B = k_f^B / \Delta G^A \\ \
                & m_u^B = |m_f^A + m_u^A| - |m_f^B| \\ \
                & k_{obs}^A = k_f^A \exp(-m_f^A x) + k_u^A \exp(m_u^A x) \\ \
                & k_{obs}^B = k_f^B \exp(-m_f^B x) + k_u^B \exp(m_u^B x) \\ \
                \end{aligned} \
                \end{equation}'
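The linkage constraint in `fit_func` above (pathway B's unfolding rate is derived from the shared stability rather than fitted) can be sketched stand-alone. This helper is illustrative only and not part of the original module; the defaults mirror the class's `default_params`:

```python
import math

def parallel_two_state_kobs(x, kf_A=50.0, mf_A=1.3480, ku_A=5e-4, mu_A=1.0,
                            kf_B=150.0, mf_B=3.5):
    """Observed rate for two parallel folding pathways sharing one equilibrium."""
    deltaG_A = kf_A / ku_A   # equilibrium constant from pathway A
    ku_B = kf_B / deltaG_A   # thermodynamic linkage fixes pathway B's unfolding rate
    mu_B = abs(mf_A + mu_A) - abs(mf_B)
    k_obs_A = kf_A * math.exp(-mf_A * x) + ku_A * math.exp(mu_A * x)
    k_obs_B = kf_B * math.exp(-mf_B * x) + ku_B * math.exp(mu_B * x)
    return k_obs_A + k_obs_B
```

At zero denaturant the refolding terms dominate and the observed rate is essentially kf_A + kf_B.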


class ParallelTwoStateUnfoldingChevron(core.FitModel):
    """ Parallel Two state unfolding chevron plot.

    Folding Scheme:
        N -> D
        |    ^
        |____|

    Params:
    k obs = rate constant of unfolding at a particular denaturant concentration
    k_obs_A = rate constant of unfolding of pathway A at a particular denaturant concentration
    k_obs_B = rate constant of unfolding of pathway B at a particular denaturant concentration
    mu_A = the gradient of unfolding arm of pathway A
    mu_B = the gradient of unfolding arm of pathway B
    x = denaturant concentration (M)

    Reference:
    Hutton et al. Mapping the Topography of a Protein Energy Landscape.
    JACS (2015) vol. 137 (46) pp. 14610-14625
    """

    def __init__(self):
        core.FitModel.__init__(self)
        fit_args = self.fit_func_args
        self.params = tuple([(fit_args[i], i) for i in range(len(fit_args))])
        self.default_params = np.array([5e-4, 1., 1e-5, 1.5])

    def fit_func(self, x, ku_A, mu_A, ku_B, mu_B):
        if mu_A < 0. or mu_B < 0.:
            return core.FIT_ERROR(x)
        k_obs_A = ku_A*np.exp(mu_A*x)
        k_obs_B = ku_B*np.exp(mu_B*x)
        k_obs = k_obs_A + k_obs_B
        return k_obs

    def error_func(self, y):
        return np.log(y)

    def components(self, x, ku_A, mu_A, ku_B, mu_B):
        k_obs_A = ku_A*np.exp(mu_A*x)
        k_obs_B = ku_B*np.exp(mu_B*x)
        k_obs = k_obs_A + k_obs_B
        return {'kobs_A': k_obs_A, 'kobs_B': k_obs_B}

    # NOTE (ergm) added on 23/8/2017
    @property
    def equation(self):
        return r'\begin{equation} \
                \begin{aligned} \
                & k_{obs} = k_{obs}^A + k_{obs}^B \\ \
                \\ \
                \text{where:} \\ \
                & k_{obs}^A = k_u^A \exp(m_u^A x) \\ \
                & k_{obs}^B = k_u^B \exp(m_u^B x) \\ \
                \end{aligned} \
                \end{equation}'


class TwoStateChevronMovingTransition(core.FitModel):
    """ Two state chevron with moving transition state.

    Folding Scheme:
        N <-> D

    Params:
    k obs = rate of unfolding or refolding at a particular denaturant concentration
    kf = rate constant of refolding at a particular denaturant concentration
    mf = refolding coefficient for the first order [D] term.
    ku = rate constant of unfolding at a particular denaturant concentration
    mu = unfolding coefficient for the first order [D] term.
    m' = coefficient for the second-order [D] term (both unfolding and refolding).
    x = denaturant concentration (M)

    Reference:
    Ternstrom et al. From snapshot to movie: phi analysis
    of protein folding transition states taken one step
    further. PNAS (1999) vol. 96 (26) pp. 14854-9
    """

    def __init__(self):
        core.FitModel.__init__(self)
        fit_args = self.fit_func_args
        self.params = tuple([(fit_args[i], i) for i in range(len(fit_args))])
        # NOTE (ergm) changed on 23/8/2017
        self.default_params = np.array([5e-5, 0.2, 10., 0.2, -1.])
        # NOTE (ergm) added on 3/11/2017
        self.verified = True

    # NOTE (ergm) changed on 23/8/2017
    def fit_func(self, x, ku, mu, kf, mf, m_prime):
        k_obs = ku*(np.exp(mu*x))*(np.exp(m_prime*x*x)) + kf*(np.exp(mf*x))*(np.exp(m_prime*x*x))
        return k_obs

    def error_func(self, y):
        return np.log(y)

    # NOTE (ergm) added on 23/8/2017
    @property
    def equation(self):
        return r'\begin{equation} \
                \begin{aligned} \
                & k_{obs} = k_u + k_f \\ \
                \\ \
                \text{where:} \\ \
                & k_u = k_u^{H_2O} \cdot \exp(m_{u} x) \cdot \exp(m^{*} x^2) \\ \
                & k_f = k_f^{H_2O} \cdot \exp(m_{f} x) \cdot \exp(m^{*} x^2) \\ \
                \end{aligned} \
                \end{equation}'


# NOTE (ergm) added on 24/8/2017 & modified on 7/11/2017
class ChevronPolynomialFit(core.FitModel):
    """ Chevron fit with 2 different second order polynomials for kf & ku.

    Folding Scheme:
        N <-> D

    Params:
    k obs = rate of unfolding or refolding at a particular denaturant concentration
    kf = rate constant of refolding at a particular denaturant concentration
    mf & mf* = are the refolding coefficients for the first and second-order [D] terms, respectively.
    ku = rate constant of unfolding at a particular denaturant concentration
    mu & mu* = are the unfolding coefficients for the first and second-order [D] terms, respectively.
    x = denaturant concentration (M)

    Reference:
    Modified version of equation found in:
    Ternstrom et al. From snapshot to movie: phi analysis
    of protein folding transition states taken one step
    further. PNAS (1999) vol. 96 (26) pp. 14854-9
    """

    def __init__(self):
        core.FitModel.__init__(self)
        fit_args = self.fit_func_args
        self.params = tuple([(fit_args[i], i) for i in range(len(fit_args))])
        self.default_params = np.array([5e-5, 1., -0.5, 100., 1., -0.5])
        # NOTE (ergm) changed on 3/11/2017
        self.verified = True

    def fit_func(self, x, ku, mu, mu_prime, kf, mf, mf_prime):
        k_obs = ku*(np.exp(mu*x))*(np.exp(mu_prime*x*x)) + kf*(np.exp(mf*x))*(np.exp(mf_prime*x*x))
        return k_obs

    def error_func(self, y):
        return np.log(y)

    @property
    def equation(self):
        return r'\begin{equation} \
                \begin{aligned} \
                & k_{obs} = k_u + k_f \\ \
                \\ \
                \text{where:} \\ \
                & k_u = k_u^{H_2O} \cdot \exp(m_{u} x) \cdot \exp(m_{u}^{*} x^2) \\ \
                & k_f = k_f^{H_2O} \cdot \exp(m_{f} x) \cdot \exp(m_{f}^{*} x^2) \\ \
                \end{aligned} \
                \end{equation}'
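The second-order polynomial form evaluated by `fit_func` above can be written as a stand-alone scalar sketch. This helper is illustrative only (not part of the original module); the defaults mirror the class's `default_params`:

```python
import math

def polynomial_chevron_kobs(x, ku=5e-5, mu=1.0, mu_prime=-0.5,
                            kf=100.0, mf=1.0, mf_prime=-0.5):
    """k_obs with independent second-order denaturant terms for ku and kf."""
    k_u = ku * math.exp(mu * x) * math.exp(mu_prime * x * x)
    k_f = kf * math.exp(mf * x) * math.exp(mf_prime * x * x)
    return k_u + k_f
```

At x = 0 both exponentials are 1, so the value reduces to the water rates ku + kf.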
if __name__ == "__main__":
get_models()


# File: clize/tests/test_converters.py (repo: scholer/clize, license: MIT)

# clize -- A command-line argument parser for Python
# Copyright (C) 2011-2016 by Yann Kaiser and contributors. See AUTHORS and
# COPYING for details.
from datetime import datetime
import tempfile
import shutil
import os
import stat
import sys
from six.moves import cStringIO
from sigtools import support, modifiers
from clize import parser, errors, converters
from clize.tests.util import Fixtures, Tests


class ConverterRepTests(Fixtures):
    def _test(self, conv, rep):
        sig = support.s('*, par: c', locals={'c': conv})
        csig = parser.CliSignature.from_signature(sig)
        self.assertEqual(str(csig), rep)

    datetime = converters.datetime, '--par=TIME'
    file = converters.file(), '--par=FILE'


class ConverterTests(Fixtures):
    def _test(self, conv, inp, out):
        sig = support.s('*, par: c', locals={'c': conv})
        csig = parser.CliSignature.from_signature(sig)
        ba = self.read_arguments(csig, ['--par', inp])
        self.assertEqual(out, ba.kwargs['par'])

    dt_jan1 = (
        converters.datetime, '2014-01-01 12:00', datetime(2014, 1, 1, 12, 0))


class FileConverterTests(Tests):
    def setUp(self):
        self.temp = tempfile.mkdtemp()
        self.completed = False

    def tearDown(self):
        shutil.rmtree(self.temp)

    def run_conv(self, conv, path):
        sig = support.s('*, par: c', locals={'c': conv})
        csig = parser.CliSignature.from_signature(sig)
        ba = self.read_arguments(csig, ['--par', path])
        return ba.kwargs['par']
    def test_ret_type(self):
        path = os.path.join(self.temp, 'afile')
        arg = self.run_conv(converters.file(mode='w'), path)
        self.assertTrue(isinstance(arg, converters._FileOpener))
        type(arg).__enter__

    def test_file_read(self):
        path = os.path.join(self.temp, 'afile')
        open(path, 'w').close()

        @modifiers.annotate(afile=converters.file())
        def func(afile):
            with afile as f:
                self.assertEqual(f.name, path)
                self.assertEqual(f.mode, 'r')
            self.assertTrue(f.closed)
            self.completed = True

        o, e = self.crun(func, ['test', path])
        self.assertFalse(o.getvalue())
        self.assertFalse(e.getvalue())
        self.assertTrue(self.completed)

    def test_not_called(self):
        path = os.path.join(self.temp, 'afile')
        open(path, 'w').close()

        @modifiers.annotate(afile=converters.file)
        def func(afile):
            with afile as f:
                self.assertEqual(f.name, path)
                self.assertEqual(f.mode, 'r')
            self.assertTrue(f.closed)
            self.completed = True

        o, e = self.crun(func, ['test', path])
        self.assertFalse(o.getvalue())
        self.assertFalse(e.getvalue())
        self.assertTrue(self.completed)
    def test_file_write(self):
        path = os.path.join(self.temp, 'afile')

        @modifiers.annotate(afile=converters.file(mode='w'))
        def func(afile):
            self.assertFalse(os.path.exists(path))
            with afile as f:
                self.assertEqual(f.name, path)
                self.assertEqual(f.mode, 'w')
            self.assertTrue(f.closed)
            self.assertTrue(os.path.exists(path))
            self.completed = True

        o, e = self.crun(func, ['test', path])
        self.assertFalse(o.getvalue())
        self.assertFalse(e.getvalue())
        self.assertTrue(self.completed)

    def test_file_missing(self):
        path = os.path.join(self.temp, 'afile')
        self.assertRaises(errors.BadArgumentFormat,
                          self.run_conv, converters.file(), path)

        @modifiers.annotate(afile=converters.file())
        def func(afile):
            raise NotImplementedError

        stdout, stderr = self.crun(func, ['test', path])
        self.assertFalse(stdout.getvalue())
        self.assertTrue(stderr.getvalue().startswith(
            'test: Bad value for afile: File does not exist: '))

    def test_dir_missing(self):
        path = os.path.join(self.temp, 'adir/afile')
        self.assertRaises(errors.BadArgumentFormat,
                          self.run_conv, converters.file(mode='w'), path)

        @modifiers.annotate(afile=converters.file(mode='w'))
        def func(afile):
            raise NotImplementedError

        stdout, stderr = self.crun(func, ['test', path])
        self.assertFalse(stdout.getvalue())
        self.assertTrue(stderr.getvalue().startswith(
            'test: Bad value for afile: Directory does not exist: '))

    def test_current_dir(self):
        path = 'afile'

        @modifiers.annotate(afile=converters.file(mode='w'))
        def func(afile):
            with afile as f:
                self.assertEqual(f.name, path)
                self.assertEqual(f.mode, 'w')
            self.assertTrue(f.closed)
            self.assertTrue(os.path.exists(path))
            self.completed = True

        with self.cd(self.temp):
            stdout, stderr = self.crun(func, ['test', path])
        self.assertFalse(stdout.getvalue())
        self.assertFalse(stderr.getvalue())
        self.assertTrue(self.completed)
    def test_default_value(self):
        path = os.path.join(self.temp, 'default')
        open(path, 'w').close()

        @modifiers.annotate(afile=converters.file())
        def func(afile=path):
            with afile as f:
                self.assertEqual(f.name, path)
                self.assertEqual(f.mode, 'r')
            self.assertTrue(f.closed)
            self.completed = True

        stdout, stderr = self.crun(func, ['test'])
        self.assertFalse(stdout.getvalue())
        self.assertFalse(stderr.getvalue())
        self.assertTrue(self.completed)

    def test_default_none_value(self):
        @modifiers.annotate(afile=converters.file())
        def func(afile=None):
            self.assertIs(afile, None)
            self.completed = True

        stdout, stderr = self.crun(func, ['test'])
        self.assertFalse(stdout.getvalue())
        self.assertFalse(stderr.getvalue())
        self.assertTrue(self.completed)

    def test_noperm_file_write(self):
        path = os.path.join(self.temp, 'afile')
        open(path, mode='w').close()
        os.chmod(path, stat.S_IRUSR)
        self.assertRaises(errors.BadArgumentFormat,
                          self.run_conv, converters.file(mode='w'), path)

    def test_noperm_dir(self):
        dpath = os.path.join(self.temp, 'adir')
        path = os.path.join(self.temp, 'adir/afile')
        os.mkdir(dpath)
        os.chmod(dpath, stat.S_IRUSR)
        self.assertRaises(errors.BadArgumentFormat,
                          self.run_conv, converters.file(mode='w'), path)

    def test_race(self):
        path = os.path.join(self.temp, 'afile')
        open(path, mode='w').close()

        @modifiers.annotate(afile=converters.file(mode='w'))
        def func(afile):
            os.chmod(path, stat.S_IRUSR)
            with afile:
                raise NotImplementedError

        stdout, stderr = self.crun(func, ['test', path])
        self.assertFalse(stdout.getvalue())
        self.assertTrue(stderr.getvalue().startswith(
            'test: Permission denied: '))
    def test_stdin(self):
        stdin = cStringIO()

        @modifiers.annotate(afile=converters.file())
        def func(afile):
            with afile as f:
                self.assertIs(f, stdin)

        stdout, stderr = self.crun(func, ['test', '-'], stdin=stdin)
        self.assertTrue(stdin.closed)
        self.assertFalse(stdout.getvalue())
        self.assertFalse(stderr.getvalue())

    def test_stdout(self):
        @modifiers.annotate(afile=converters.file(mode='w'))
        def func(afile):
            with afile as f:
                self.assertIs(f, sys.stdout)

        stdout, stderr = self.crun(func, ['test', '-'])
        self.assertTrue(stdout.closed)
        self.assertFalse(stderr.getvalue())

    def test_change_sym(self):
        @modifiers.annotate(afile=converters.file(stdio='gimmestdio'))
        def func(afile):
            with afile as f:
                self.assertIs(f, sys.stdin)

        stdout, stderr = self.crun(func, ['test', 'gimmestdio'])
        self.assertFalse(stdout.getvalue())
        self.assertFalse(stderr.getvalue())

        with self.cd(self.temp):
            self.assertFalse(os.path.exists('-'))
            stdout, stderr = self.crun(func, ['test', '-'])
        self.assertFalse(stdout.getvalue())
        self.assertTrue(stderr.getvalue().startswith(
            'test: Bad value for afile: File does not exist: '))

    def test_no_sym(self):
        @modifiers.annotate(afile=converters.file(stdio=None))
        def func(afile):
            raise NotImplementedError

        self.assertFalse(os.path.exists('-'))
        stdout, stderr = self.crun(func, ['test', '-'])
        self.assertFalse(stdout.getvalue())
        self.assertTrue(stderr.getvalue().startswith(
            'test: Bad value for afile: File does not exist: '))

    def test_stdin_no_close(self):
        stdin = cStringIO()

        @modifiers.annotate(afile=converters.file(keep_stdio_open=True))
        def func(afile):
            with afile as f:
                self.assertIs(f, stdin)

        stdout, stderr = self.crun(func, ['test', '-'], stdin=stdin)
        self.assertFalse(stdin.closed)
        self.assertFalse(stdout.getvalue())
        self.assertFalse(stderr.getvalue())

    def test_stdout_no_close(self):
        @modifiers.annotate(afile=converters.file(mode='w', keep_stdio_open=True))
        def func(afile):
            with afile as f:
                self.assertIs(f, sys.stdout)

        stdout, stderr = self.crun(func, ['test', '-'])
        self.assertFalse(stdout.closed)
        self.assertFalse(stdout.getvalue())
        self.assertFalse(stderr.getvalue())
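The tests above exercise a deferred-open pattern: `converters.file()` appears to validate the path when arguments are parsed but only opens the file when the value is entered as a context manager. A stdlib-only toy sketch of that pattern (an assumption made for illustration, not clize's actual `_FileOpener` implementation):

```python
import os

class LazyFileOpener:
    def __init__(self, path, mode='r'):
        # Validate eagerly, as the converter does at argument-parsing time
        if 'r' in mode and not os.path.exists(path):
            raise ValueError('File does not exist: %s' % path)
        self.path = path
        self.mode = mode

    def __enter__(self):
        # Open lazily, only when the decorated function actually uses the file
        self.f = open(self.path, self.mode)
        return self.f

    def __exit__(self, *exc_info):
        self.f.close()
        return False
```

This split is what lets a bad path be reported as a parse-time `BadArgumentFormat` error rather than a runtime failure inside the command.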


class ConverterErrorTests(Fixtures):
    def _test(self, conv, inp):
        sig = support.s('*, par: c', locals={'c': conv})
        csig = parser.CliSignature.from_signature(sig)
        self.assertRaises(errors.BadArgumentFormat,
                          self.read_arguments, csig, ['--par', inp])

    dt_baddate = converters.datetime, 'not a date'


# File: predicthq/endpoints/v1/accounts/endpoint.py (repo: predicthq/sdk-py, license: MIT)

from predicthq.endpoints.base import BaseEndpoint
from predicthq.endpoints.decorators import returns
from .schemas import Account


class AccountsEndpoint(BaseEndpoint):

    @returns(Account)
    def self(self):
        return self.client.get(self.build_url("v1", "accounts/self"))


# File: deepchem/dock/tests/test_pose_generation.py (repo: StashOfCode/deepchem, license: MIT)

"""
Tests for Pose Generation
"""
import os
import tempfile
import unittest
import logging
import numpy as np
import deepchem as dc
import pytest


class TestPoseGeneration(unittest.TestCase):
  """
  Does sanity checks on pose generation.
  """

  def test_vina_initialization(self):
    """Test that VinaPoseGenerator can be initialized."""
    dc.dock.VinaPoseGenerator()

  def test_pocket_vina_initialization(self):
    """Test that VinaPoseGenerator can be initialized."""
    pocket_finder = dc.dock.ConvexHullPocketFinder()
    dc.dock.VinaPoseGenerator(pocket_finder=pocket_finder)
  @pytest.mark.slow
  def test_vina_poses_and_scores(self):
    """Test that VinaPoseGenerator generates poses and scores

    This test takes some time to run, about a minute and a half on
    development laptop.
    """
    # Let's turn on logging since this test will run for a while
    logging.basicConfig(level=logging.INFO)
    current_dir = os.path.dirname(os.path.realpath(__file__))
    protein_file = os.path.join(current_dir, "1jld_protein.pdb")
    ligand_file = os.path.join(current_dir, "1jld_ligand.sdf")

    vpg = dc.dock.VinaPoseGenerator(pocket_finder=None)
    with tempfile.TemporaryDirectory() as tmp:
      poses, scores = vpg.generate_poses(
          (protein_file, ligand_file),
          exhaustiveness=1,
          num_modes=1,
          out_dir=tmp,
          generate_scores=True)

    assert len(poses) == 1
    assert len(scores) == 1
    protein, ligand = poses[0]
    from rdkit import Chem
    assert isinstance(protein, Chem.Mol)
    assert isinstance(ligand, Chem.Mol)
  @pytest.mark.slow
  def test_vina_poses_no_scores(self):
    """Test that VinaPoseGenerator generates poses.

    This test takes some time to run, about a minute and a half on
    development laptop.
    """
    # Let's turn on logging since this test will run for a while
    logging.basicConfig(level=logging.INFO)
    current_dir = os.path.dirname(os.path.realpath(__file__))
    protein_file = os.path.join(current_dir, "1jld_protein.pdb")
    ligand_file = os.path.join(current_dir, "1jld_ligand.sdf")

    vpg = dc.dock.VinaPoseGenerator(pocket_finder=None)
    with tempfile.TemporaryDirectory() as tmp:
      poses = vpg.generate_poses(
          (protein_file, ligand_file),
          exhaustiveness=1,
          num_modes=1,
          out_dir=tmp,
          generate_scores=False)

    assert len(poses) == 1
    protein, ligand = poses[0]
    from rdkit import Chem
    assert isinstance(protein, Chem.Mol)
    assert isinstance(ligand, Chem.Mol)
  @pytest.mark.slow
  def test_vina_pose_specified_centroid(self):
    """Test that VinaPoseGenerator creates pose files with specified centroid/box dims.

    This test takes some time to run, about a minute and a half on
    development laptop.
    """
    # Let's turn on logging since this test will run for a while
    logging.basicConfig(level=logging.INFO)
    current_dir = os.path.dirname(os.path.realpath(__file__))
    protein_file = os.path.join(current_dir, "1jld_protein.pdb")
    ligand_file = os.path.join(current_dir, "1jld_ligand.sdf")

    centroid = np.array([56.21891368, 25.95862964, 3.58950065])
    box_dims = np.array([51.354, 51.243, 55.608])
    vpg = dc.dock.VinaPoseGenerator(pocket_finder=None)
    with tempfile.TemporaryDirectory() as tmp:
      poses, scores = vpg.generate_poses(
          (protein_file, ligand_file),
          centroid=centroid,
          box_dims=box_dims,
          exhaustiveness=1,
          num_modes=1,
          out_dir=tmp,
          generate_scores=True)

    assert len(poses) == 1
    assert len(scores) == 1
    protein, ligand = poses[0]
    from rdkit import Chem
    assert isinstance(protein, Chem.Mol)
    assert isinstance(ligand, Chem.Mol)
  @pytest.mark.slow
  def test_pocket_vina_poses(self):
    """Test that VinaPoseGenerator creates pose files.

    This test is quite slow and takes about 5 minutes to run on a
    development laptop.
    """
    # Let's turn on logging since this test will run for a while
    logging.basicConfig(level=logging.INFO)
    current_dir = os.path.dirname(os.path.realpath(__file__))
    protein_file = os.path.join(current_dir, "1jld_protein.pdb")
    ligand_file = os.path.join(current_dir, "1jld_ligand.sdf")

    # Note this may download autodock Vina...
    convex_finder = dc.dock.ConvexHullPocketFinder()
    vpg = dc.dock.VinaPoseGenerator(pocket_finder=convex_finder)
    with tempfile.TemporaryDirectory() as tmp:
      poses, scores = vpg.generate_poses(
          (protein_file, ligand_file),
          exhaustiveness=1,
          num_modes=1,
          num_pockets=2,
          out_dir=tmp,
          generate_scores=True)

    assert len(poses) == 2
    assert len(scores) == 2
    from rdkit import Chem
    for pose in poses:
      protein, ligand = pose
      assert isinstance(protein, Chem.Mol)
      assert isinstance(ligand, Chem.Mol)


# File: api/api/schemas/geo_schema.py (repo: arunrapolu4491/court-interpreter-scheduling, license: Apache-2.0)

from pydantic import BaseModel
from typing import Optional


class GeoUpdateScheduleRequestSchema(BaseModel):
    update_schedule: Optional[str]


# File: tool_hub/data_reshaper.py (repo: WeijieChen2017/MMIPS, license: MIT)

#!/usr/bin/python
# -*- coding: UTF-8 -*-
from basic_class.basic_class import basic_class


class data_reshaper(basic_class):
    def __init__(self, shape_info):
        super().__init__()
        self.shape_info = shape_info

    def apply(self, input_data):
        pass


# File: neutronpy/fileio/loaders/__init__.py (repo: neutronpy/neutronpy, license: MIT)

from .dcs_mslice import DcsMslice
from .grasp import Grasp
from .ice import Ice
from .icp import Icp
from .mad import Mad
from .neutronpy import Neutronpy
from .spice import Spice


# File: twitterAnalyzer/sentiment/admin.py (repo: bherr006/CS179G, license: MIT)

from django.contrib import admin
from .models import Tweet
# Register your models here.
admin.site.register(Tweet)


# File: backend/currency_exchanger/currencies/tasks.py (repo: norbertcyran/currency-exchanger, license: MIT)

import logging
from celery import shared_task
from .rates import update_currency_rates
logger = logging.getLogger(__name__)


@shared_task
def update_currency_rates_async():
    update_currency_rates()


# File: test.py (repo: susoo/igbot, license: Apache-2.0)

import instabot
bot = instabot.Bot()
bot.login(username='francoz_pablo', password='Pablocare2019')


# File: fn_cve_search/fn_cve_search/util/customize.py (repo: rudimeyer/resilient-community-apps, license: MIT)

# -*- coding: utf-8 -*-
"""Generate the Resilient customizations required for fn_cve_search"""
from __future__ import print_function
from resilient_circuits.util import *
def codegen_reload_data():
    """Parameters to codegen used to generate the fn_cve_search package"""
    reload_params = {"package": u"fn_cve_search",
                     "incident_fields": [],
                     "action_fields": [u"cve_id", u"cve_product", u"cve_published_date_from", u"cve_published_date_to", u"cve_vendor"],
                     "function_params": [u"cve_browse_criteria", u"cve_id", u"cve_product", u"cve_published_date_from", u"cve_published_date_to", u"cve_vendor"],
                     "datatables": [u"cve_data"],
                     "message_destinations": [u"fn_cve"],
                     "functions": [u"function_cve_browse", u"function_cve_search"],
                     "phases": [],
                     "automatic_tasks": [],
                     "scripts": [],
                     "workflows": [u"example_cve_browse", u"example_cve_search"],
                     "actions": [u"Example: CVE Browse", u"Example: CVE Search"]
                     }
    return reload_params
def customization_data(client=None):
"""Produce any customization definitions (types, fields, message destinations, etc)
that should be installed by `resilient-circuits customize`
"""
# This import data contains:
# Action fields:
# cve_id
# cve_product
# cve_published_date_from
# cve_published_date_to
# cve_vendor
# Function inputs:
# cve_browse_criteria
# cve_id
# cve_product
# cve_published_date_from
# cve_published_date_to
# cve_vendor
# DataTables:
# cve_data
# Message Destinations:
# fn_cve
# Functions:
# function_cve_browse
# function_cve_search
# Workflows:
# example_cve_browse
# example_cve_search
# Rules:
# Example: CVE Browse
# Example: CVE Search
yield ImportDefinition(u"""
eyJzZXJ2ZXJfdmVyc2lvbiI6IHsibWFqb3IiOiAzMSwgIm1pbm9yIjogMCwgImJ1aWxkX251bWJl
ciI6IDQyNTQsICJ2ZXJzaW9uIjogIjMxLjAuNDI1NCJ9LCAiZXhwb3J0X2Zvcm1hdF92ZXJzaW9u
IjogMiwgImlkIjogMjcsICJleHBvcnRfZGF0ZSI6IDE1NTM2Mjg4Njk2MzMsICJmaWVsZHMiOiBb
eyJpZCI6IDUxLCAibmFtZSI6ICJpbmNfdHJhaW5pbmciLCAidGV4dCI6ICJTaW11bGF0aW9uIiwg
InByZWZpeCI6IG51bGwsICJ0eXBlX2lkIjogMCwgInRvb2x0aXAiOiAiV2hldGhlciB0aGUgaW5j
aWRlbnQgaXMgYSBzaW11bGF0aW9uIG9yIGEgcmVndWxhciBpbmNpZGVudC4gIFRoaXMgZmllbGQg
aXMgcmVhZC1vbmx5LiIsICJpbnB1dF90eXBlIjogImJvb2xlYW4iLCAiaGlkZV9ub3RpZmljYXRp
b24iOiBmYWxzZSwgImNob3NlbiI6IGZhbHNlLCAiZGVmYXVsdF9jaG9zZW5fYnlfc2VydmVyIjog
ZmFsc2UsICJibGFua19vcHRpb24iOiBmYWxzZSwgImludGVybmFsIjogZmFsc2UsICJ1dWlkIjog
ImMzZjBlM2VkLTIxZTEtNGQ1My1hZmZiLWZlNWNhMzMwOGNjYSIsICJvcGVyYXRpb25zIjogW10s
ICJvcGVyYXRpb25fcGVybXMiOiB7fSwgInZhbHVlcyI6IFtdLCAicmVhZF9vbmx5IjogdHJ1ZSwg
ImNoYW5nZWFibGUiOiB0cnVlLCAicmljaF90ZXh0IjogZmFsc2UsICJleHBvcnRfa2V5IjogImlu
Y2lkZW50L2luY190cmFpbmluZyIsICJ0ZW1wbGF0ZXMiOiBbXSwgImRlcHJlY2F0ZWQiOiBmYWxz
ZX0sIHsiaWQiOiAyOTMsICJuYW1lIjogImN2ZV9wdWJsaXNoZWRfZGF0ZV90byIsICJ0ZXh0Ijog
IkNWRSBQdWJsaXNoZWQgRGF0ZSBUbyIsICJwcmVmaXgiOiAicHJvcGVydGllcyIsICJ0eXBlX2lk
IjogNiwgInRvb2x0aXAiOiAiRW5kIGRhdGUgcmFuZ2UgdG8gc2VhcmNoIGN2ZSBkYXRhIE5vdGUg
OiBzZWxlY3RpbmcgZGF0ZSByYW5nZSBsaW1pdHMgc2VhcmNoIHRvIHNwZWNpZmllZCByYW5nZSBv
ZiBkYXRlcyIsICJwbGFjZWhvbGRlciI6ICIiLCAiaW5wdXRfdHlwZSI6ICJkYXRlcGlja2VyIiwg
ImhpZGVfbm90aWZpY2F0aW9uIjogZmFsc2UsICJjaG9zZW4iOiBmYWxzZSwgImRlZmF1bHRfY2hv
c2VuX2J5X3NlcnZlciI6IGZhbHNlLCAiYmxhbmtfb3B0aW9uIjogZmFsc2UsICJpbnRlcm5hbCI6
IGZhbHNlLCAidXVpZCI6ICJhMTRmZTM2OC1jNjQxLTQ3ZjMtOTAxYi0yZWQ5MjgyMTMzYmIiLCAi
b3BlcmF0aW9ucyI6IFtdLCAib3BlcmF0aW9uX3Blcm1zIjoge30sICJ2YWx1ZXMiOiBbXSwgInJl
YWRfb25seSI6IGZhbHNlLCAiY2hhbmdlYWJsZSI6IHRydWUsICJyaWNoX3RleHQiOiBmYWxzZSwg
ImV4cG9ydF9rZXkiOiAiYWN0aW9uaW52b2NhdGlvbi9jdmVfcHVibGlzaGVkX2RhdGVfdG8iLCAi
dGVtcGxhdGVzIjogW10sICJkZXByZWNhdGVkIjogZmFsc2V9LCB7ImlkIjogMjk3LCAibmFtZSI6
ICJjdmVfcHJvZHVjdCIsICJ0ZXh0IjogIkNWRSBQcm9kdWN0IiwgInByZWZpeCI6ICJwcm9wZXJ0
aWVzIiwgInR5cGVfaWQiOiA2LCAidG9vbHRpcCI6ICJOYW1lIG9mIHRoZSBQcm9kdWN0IHRvIFNl
YXJjaCBpbiBDVkUgRGF0YWJhc2UiLCAicGxhY2Vob2xkZXIiOiAiIiwgImlucHV0X3R5cGUiOiAi
dGV4dCIsICJoaWRlX25vdGlmaWNhdGlvbiI6IGZhbHNlLCAiY2hvc2VuIjogZmFsc2UsICJkZWZh
dWx0X2Nob3Nlbl9ieV9zZXJ2ZXIiOiBmYWxzZSwgImJsYW5rX29wdGlvbiI6IGZhbHNlLCAiaW50
ZXJuYWwiOiBmYWxzZSwgInV1aWQiOiAiZjVjMDZiZGYtYTQxZS00MWE5LWI5MmEtMGFjMDU3MjY2
NDMzIiwgIm9wZXJhdGlvbnMiOiBbXSwgIm9wZXJhdGlvbl9wZXJtcyI6IHt9LCAidmFsdWVzIjog
W10sICJyZWFkX29ubHkiOiBmYWxzZSwgImNoYW5nZWFibGUiOiB0cnVlLCAicmljaF90ZXh0Ijog
ZmFsc2UsICJleHBvcnRfa2V5IjogImFjdGlvbmludm9jYXRpb24vY3ZlX3Byb2R1Y3QiLCAidGVt
cGxhdGVzIjogW10sICJkZXByZWNhdGVkIjogZmFsc2V9LCB7ImlkIjogMjk4LCAibmFtZSI6ICJj
dmVfaWQiLCAidGV4dCI6ICJDVkUgSUQiLCAicHJlZml4IjogInByb3BlcnRpZXMiLCAidHlwZV9p
ZCI6IDYsICJ0b29sdGlwIjogIlVzZSAnU3BlY2lmaWMgQ1ZFIElEJyB0byBzZWFyY2ggdGhlIGRh
dGEgZnJvbSBDVkUgRGF0YWJhc2UiLCAicGxhY2Vob2xkZXIiOiAiIiwgImlucHV0X3R5cGUiOiAi
dGV4dCIsICJoaWRlX25vdGlmaWNhdGlvbiI6IGZhbHNlLCAiY2hvc2VuIjogZmFsc2UsICJkZWZh
dWx0X2Nob3Nlbl9ieV9zZXJ2ZXIiOiBmYWxzZSwgImJsYW5rX29wdGlvbiI6IGZhbHNlLCAiaW50
ZXJuYWwiOiBmYWxzZSwgInV1aWQiOiAiOWVlN2M4M2MtMGYyMS00NjFkLTgxNGQtOWRmNGI5YzBj
MTU2IiwgIm9wZXJhdGlvbnMiOiBbXSwgIm9wZXJhdGlvbl9wZXJtcyI6IHt9LCAidmFsdWVzIjog
W10sICJyZWFkX29ubHkiOiBmYWxzZSwgImNoYW5nZWFibGUiOiB0cnVlLCAicmljaF90ZXh0Ijog
ZmFsc2UsICJleHBvcnRfa2V5IjogImFjdGlvbmludm9jYXRpb24vY3ZlX2lkIiwgInRlbXBsYXRl
cyI6IFtdLCAiZGVwcmVjYXRlZCI6IGZhbHNlfSwgeyJpZCI6IDI5NiwgIm5hbWUiOiAiY3ZlX3Zl
bmRvciIsICJ0ZXh0IjogIkNWRSBWZW5kb3IiLCAicHJlZml4IjogInByb3BlcnRpZXMiLCAidHlw
ZV9pZCI6IDYsICJ0b29sdGlwIjogIkEgdmVuZG9yIG5hbWUgdG8gYnJvd3NlIGZvciBjdmUncyIs
ICJwbGFjZWhvbGRlciI6ICIiLCAiaW5wdXRfdHlwZSI6ICJ0ZXh0IiwgImhpZGVfbm90aWZpY2F0
aW9uIjogZmFsc2UsICJjaG9zZW4iOiBmYWxzZSwgImRlZmF1bHRfY2hvc2VuX2J5X3NlcnZlciI6
IGZhbHNlLCAiYmxhbmtfb3B0aW9uIjogZmFsc2UsICJpbnRlcm5hbCI6IGZhbHNlLCAidXVpZCI6
ICJiMzUyZDY1ZS1kOGY5LTQ1Y2EtODFhZS01ZmJhOWMyZGNjMGYiLCAib3BlcmF0aW9ucyI6IFtd
LCAib3BlcmF0aW9uX3Blcm1zIjoge30sICJ2YWx1ZXMiOiBbXSwgInJlYWRfb25seSI6IGZhbHNl
LCAiY2hhbmdlYWJsZSI6IHRydWUsICJyaWNoX3RleHQiOiBmYWxzZSwgImV4cG9ydF9rZXkiOiAi
YWN0aW9uaW52b2NhdGlvbi9jdmVfdmVuZG9yIiwgInRlbXBsYXRlcyI6IFtdLCAiZGVwcmVjYXRl
ZCI6IGZhbHNlfSwgeyJpZCI6IDI5MiwgIm5hbWUiOiAiY3ZlX3B1Ymxpc2hlZF9kYXRlX2Zyb20i
LCAidGV4dCI6ICJDVkUgUHVibGlzaGVkIERhdGUgRnJvbSIsICJwcmVmaXgiOiAicHJvcGVydGll
cyIsICJ0eXBlX2lkIjogNiwgInRvb2x0aXAiOiAiU2VsZWN0IENWRSBQdWJsaXNoZWQgRGF0ZSAg
Tm90ZSA6IHNlbGVjdGluZyBkYXRlIHJhbmdlIGxpbWl0cyBzZWFyY2ggdG8gc3BlY2lmaWVkIHJh
bmdlIG9mIGRhdGVzIiwgInBsYWNlaG9sZGVyIjogIiIsICJpbnB1dF90eXBlIjogImRhdGVwaWNr
ZXIiLCAiaGlkZV9ub3RpZmljYXRpb24iOiBmYWxzZSwgImNob3NlbiI6IGZhbHNlLCAiZGVmYXVs
dF9jaG9zZW5fYnlfc2VydmVyIjogZmFsc2UsICJibGFua19vcHRpb24iOiBmYWxzZSwgImludGVy
bmFsIjogZmFsc2UsICJ1dWlkIjogIjVmYWU5ZTc5LTExMTgtNDE5NS1iZWRiLWVjYzQwYzQyYTY3
OSIsICJvcGVyYXRpb25zIjogW10sICJvcGVyYXRpb25fcGVybXMiOiB7fSwgInZhbHVlcyI6IFtd
LCAicmVhZF9vbmx5IjogZmFsc2UsICJjaGFuZ2VhYmxlIjogdHJ1ZSwgInJpY2hfdGV4dCI6IGZh
bHNlLCAiZXhwb3J0X2tleSI6ICJhY3Rpb25pbnZvY2F0aW9uL2N2ZV9wdWJsaXNoZWRfZGF0ZV9m
cm9tIiwgInRlbXBsYXRlcyI6IFtdLCAiZGVwcmVjYXRlZCI6IGZhbHNlfSwgeyJpZCI6IDI5OSwg
Im5hbWUiOiAiY3ZlX2lkIiwgInRleHQiOiAiY3ZlX2lkIiwgInByZWZpeCI6IG51bGwsICJ0eXBl
X2lkIjogMTEsICJ0b29sdGlwIjogIkNWRSBJRCIsICJwbGFjZWhvbGRlciI6ICJDVkUtMjAwOC0z
OTQ5IiwgImlucHV0X3R5cGUiOiAidGV4dCIsICJoaWRlX25vdGlmaWNhdGlvbiI6IGZhbHNlLCAi
Y2hvc2VuIjogZmFsc2UsICJkZWZhdWx0X2Nob3Nlbl9ieV9zZXJ2ZXIiOiBmYWxzZSwgImJsYW5r
X29wdGlvbiI6IGZhbHNlLCAiaW50ZXJuYWwiOiBmYWxzZSwgInV1aWQiOiAiYjZmNjI3MTAtZjE3
Mi00Njg3LWI2OWItNDgwNjIyMGEyZDUyIiwgIm9wZXJhdGlvbnMiOiBbXSwgIm9wZXJhdGlvbl9w
ZXJtcyI6IHt9LCAidmFsdWVzIjogW10sICJyZWFkX29ubHkiOiBmYWxzZSwgImNoYW5nZWFibGUi
OiB0cnVlLCAicmljaF90ZXh0IjogZmFsc2UsICJleHBvcnRfa2V5IjogIl9fZnVuY3Rpb24vY3Zl
X2lkIiwgInRlbXBsYXRlcyI6IFtdLCAiZGVwcmVjYXRlZCI6IGZhbHNlfSwgeyJpZCI6IDMwMywg
Im5hbWUiOiAiY3ZlX3B1Ymxpc2hlZF9kYXRlX3RvIiwgInRleHQiOiAiY3ZlX3B1Ymxpc2hlZF9k
YXRlX3RvIiwgInByZWZpeCI6IG51bGwsICJ0eXBlX2lkIjogMTEsICJ0b29sdGlwIjogIkVuZCBk
YXRlIHJhbmdlIHRvIHNlYXJjaCBjdmUgZGF0YSIsICJwbGFjZWhvbGRlciI6ICIiLCAiaW5wdXRf
dHlwZSI6ICJkYXRlcGlja2VyIiwgImhpZGVfbm90aWZpY2F0aW9uIjogZmFsc2UsICJjaG9zZW4i
OiBmYWxzZSwgImRlZmF1bHRfY2hvc2VuX2J5X3NlcnZlciI6IGZhbHNlLCAiYmxhbmtfb3B0aW9u
IjogZmFsc2UsICJpbnRlcm5hbCI6IGZhbHNlLCAidXVpZCI6ICI1OWQ4ZGViMC03NzIxLTQyOGEt
ODJkZC1mYTQ4NjJjZWMxYTkiLCAib3BlcmF0aW9ucyI6IFtdLCAib3BlcmF0aW9uX3Blcm1zIjog
e30sICJ2YWx1ZXMiOiBbXSwgInJlYWRfb25seSI6IGZhbHNlLCAiY2hhbmdlYWJsZSI6IHRydWUs
ICJyaWNoX3RleHQiOiBmYWxzZSwgImV4cG9ydF9rZXkiOiAiX19mdW5jdGlvbi9jdmVfcHVibGlz
aGVkX2RhdGVfdG8iLCAidGVtcGxhdGVzIjogW10sICJkZXByZWNhdGVkIjogZmFsc2V9LCB7Imlk
IjogMzA1LCAibmFtZSI6ICJjdmVfYnJvd3NlX2NyaXRlcmlhIiwgInRleHQiOiAiY3ZlX2Jyb3dz
ZV9jcml0ZXJpYSIsICJwcmVmaXgiOiBudWxsLCAidHlwZV9pZCI6IDExLCAidG9vbHRpcCI6ICJD
VkUgQnJvd3NlIENyaXRlcmlhIGkuZSBCcm93c2UoRm9yIFZlbmRvcnMgJiBQcm9kdWN0KSxDVkUg
REIgSW5mbyhnZXQgaW5mb3JtYXRpb24gYWJvdXQgY3VycmVudCBjdmUgZGF0YWJhc2UpIiwgInBs
YWNlaG9sZGVyIjogIiIsICJpbnB1dF90eXBlIjogInRleHQiLCAicmVxdWlyZWQiOiAiYWx3YXlz
IiwgImhpZGVfbm90aWZpY2F0aW9uIjogZmFsc2UsICJjaG9zZW4iOiBmYWxzZSwgImRlZmF1bHRf
Y2hvc2VuX2J5X3NlcnZlciI6IGZhbHNlLCAiYmxhbmtfb3B0aW9uIjogZmFsc2UsICJpbnRlcm5h
bCI6IGZhbHNlLCAidXVpZCI6ICI5NzcwMmQyMS0yOGFlLTRmNmUtODYzNi1lNzI4NTVhMzE4YmEi
LCAib3BlcmF0aW9ucyI6IFtdLCAib3BlcmF0aW9uX3Blcm1zIjoge30sICJ2YWx1ZXMiOiBbXSwg
InJlYWRfb25seSI6IGZhbHNlLCAiY2hhbmdlYWJsZSI6IHRydWUsICJyaWNoX3RleHQiOiBmYWxz
ZSwgImV4cG9ydF9rZXkiOiAiX19mdW5jdGlvbi9jdmVfYnJvd3NlX2NyaXRlcmlhIiwgInRlbXBs
YXRlcyI6IFtdLCAiZGVwcmVjYXRlZCI6IGZhbHNlfSwgeyJpZCI6IDMwMiwgIm5hbWUiOiAiY3Zl
X3ZlbmRvciIsICJ0ZXh0IjogImN2ZV92ZW5kb3IiLCAicHJlZml4IjogbnVsbCwgInR5cGVfaWQi
OiAxMSwgInRvb2x0aXAiOiAiYSB2ZW5kb3IgbmFtZSB0byBicm93c2UgZm9yIGN2ZSIsICJwbGFj
ZWhvbGRlciI6ICIiLCAiaW5wdXRfdHlwZSI6ICJ0ZXh0IiwgImhpZGVfbm90aWZpY2F0aW9uIjog
ZmFsc2UsICJjaG9zZW4iOiBmYWxzZSwgImRlZmF1bHRfY2hvc2VuX2J5X3NlcnZlciI6IGZhbHNl
LCAiYmxhbmtfb3B0aW9uIjogZmFsc2UsICJpbnRlcm5hbCI6IGZhbHNlLCAidXVpZCI6ICJmMWE1
Yzc5MC1mYWE0LTRhYTgtYmI0Mi00ZjE3NDkxZmY4NDUiLCAib3BlcmF0aW9ucyI6IFtdLCAib3Bl
cmF0aW9uX3Blcm1zIjoge30sICJ2YWx1ZXMiOiBbXSwgInJlYWRfb25seSI6IGZhbHNlLCAiY2hh
bmdlYWJsZSI6IHRydWUsICJyaWNoX3RleHQiOiBmYWxzZSwgImV4cG9ydF9rZXkiOiAiX19mdW5j
dGlvbi9jdmVfdmVuZG9yIiwgInRlbXBsYXRlcyI6IFtdLCAiZGVwcmVjYXRlZCI6IGZhbHNlfSwg
eyJpZCI6IDMwNCwgIm5hbWUiOiAiY3ZlX3Byb2R1Y3QiLCAidGV4dCI6ICJjdmVfcHJvZHVjdCIs
ICJwcmVmaXgiOiBudWxsLCAidHlwZV9pZCI6IDExLCAidG9vbHRpcCI6ICJOYW1lIG9mIHRoZSBQ
cm9kdWN0IHRvIFNlYXJjaCBpbiBDVkUgRGF0YWJhc2UiLCAicGxhY2Vob2xkZXIiOiAiIiwgImlu
cHV0X3R5cGUiOiAidGV4dCIsICJoaWRlX25vdGlmaWNhdGlvbiI6IGZhbHNlLCAiY2hvc2VuIjog
ZmFsc2UsICJkZWZhdWx0X2Nob3Nlbl9ieV9zZXJ2ZXIiOiBmYWxzZSwgImJsYW5rX29wdGlvbiI6
IGZhbHNlLCAiaW50ZXJuYWwiOiBmYWxzZSwgInV1aWQiOiAiNGFlNTkyMTMtYmUzNy00ZTA3LWE3
ZWEtMGM0MDkzYWRlNzMwIiwgIm9wZXJhdGlvbnMiOiBbXSwgIm9wZXJhdGlvbl9wZXJtcyI6IHt9
LCAidmFsdWVzIjogW10sICJyZWFkX29ubHkiOiBmYWxzZSwgImNoYW5nZWFibGUiOiB0cnVlLCAi
cmljaF90ZXh0IjogZmFsc2UsICJleHBvcnRfa2V5IjogIl9fZnVuY3Rpb24vY3ZlX3Byb2R1Y3Qi
LCAidGVtcGxhdGVzIjogW10sICJkZXByZWNhdGVkIjogZmFsc2V9LCB7ImlkIjogMzA2LCAibmFt
ZSI6ICJjdmVfcHVibGlzaGVkX2RhdGVfZnJvbSIsICJ0ZXh0IjogImN2ZV9wdWJsaXNoZWRfZGF0
ZV9mcm9tIiwgInByZWZpeCI6IG51bGwsICJ0eXBlX2lkIjogMTEsICJ0b29sdGlwIjogIlNlbGVj
dCBDVkUgUHVibGlzaGVkIERhdGUiLCAicGxhY2Vob2xkZXIiOiAiIiwgImlucHV0X3R5cGUiOiAi
ZGF0ZXBpY2tlciIsICJoaWRlX25vdGlmaWNhdGlvbiI6IGZhbHNlLCAiY2hvc2VuIjogZmFsc2Us
ICJkZWZhdWx0X2Nob3Nlbl9ieV9zZXJ2ZXIiOiBmYWxzZSwgImJsYW5rX29wdGlvbiI6IGZhbHNl
LCAiaW50ZXJuYWwiOiBmYWxzZSwgInV1aWQiOiAiMTEyNTQ4ZTgtYTRmOC00OTA2LWJkMmYtYTc5
ZGM2ZDQ2ZGE1IiwgIm9wZXJhdGlvbnMiOiBbXSwgIm9wZXJhdGlvbl9wZXJtcyI6IHt9LCAidmFs
dWVzIjogW10sICJyZWFkX29ubHkiOiBmYWxzZSwgImNoYW5nZWFibGUiOiB0cnVlLCAicmljaF90
ZXh0IjogZmFsc2UsICJleHBvcnRfa2V5IjogIl9fZnVuY3Rpb24vY3ZlX3B1Ymxpc2hlZF9kYXRl
X2Zyb20iLCAidGVtcGxhdGVzIjogW10sICJkZXByZWNhdGVkIjogZmFsc2V9XSwgImluY2lkZW50
X3R5cGVzIjogW3sidXBkYXRlX2RhdGUiOiAxNTUzNjI5ODMwMDcxLCAiY3JlYXRlX2RhdGUiOiAx
NTUzNjI5ODMwMDcxLCAidXVpZCI6ICJiZmVlYzJkNC0zNzcwLTExZTgtYWQzOS00YTAwMDQwNDRh
YTAiLCAiZGVzY3JpcHRpb24iOiAiQ3VzdG9taXphdGlvbiBQYWNrYWdlcyAoaW50ZXJuYWwpIiwg
ImV4cG9ydF9rZXkiOiAiQ3VzdG9taXphdGlvbiBQYWNrYWdlcyAoaW50ZXJuYWwpIiwgIm5hbWUi
OiAiQ3VzdG9taXphdGlvbiBQYWNrYWdlcyAoaW50ZXJuYWwpIiwgImVuYWJsZWQiOiBmYWxzZSwg
InN5c3RlbSI6IGZhbHNlLCAicGFyZW50X2lkIjogbnVsbCwgImhpZGRlbiI6IGZhbHNlLCAiaWQi
OiAwfV0sICJwaGFzZXMiOiBbXSwgImF1dG9tYXRpY190YXNrcyI6IFtdLCAib3ZlcnJpZGVzIjog
W10sICJtZXNzYWdlX2Rlc3RpbmF0aW9ucyI6IFt7Im5hbWUiOiAiZm5fY3ZlIiwgInByb2dyYW1t
YXRpY19uYW1lIjogImZuX2N2ZSIsICJkZXN0aW5hdGlvbl90eXBlIjogMCwgImV4cGVjdF9hY2si
OiB0cnVlLCAidXNlcnMiOiBbImFAZXhhbXBsZS5jb20iXSwgInV1aWQiOiAiNjhmZDhlYWUtYWNh
YS00ODJiLTg3OTAtZjI0NGViMzljNWY1IiwgImV4cG9ydF9rZXkiOiAiZm5fY3ZlIn1dLCAiYWN0
aW9ucyI6IFt7ImlkIjogNzAsICJuYW1lIjogIkV4YW1wbGU6IENWRSBCcm93c2UiLCAidHlwZSI6
IDEsICJvYmplY3RfdHlwZSI6ICJpbmNpZGVudCIsICJjb25kaXRpb25zIjogW10sICJhdXRvbWF0
aW9ucyI6IFtdLCAibWVzc2FnZV9kZXN0aW5hdGlvbnMiOiBbXSwgIndvcmtmbG93cyI6IFsiZXhh
bXBsZV9jdmVfYnJvd3NlIl0sICJ2aWV3X2l0ZW1zIjogW3sic3RlcF9sYWJlbCI6IG51bGwsICJz
aG93X2lmIjogbnVsbCwgImVsZW1lbnQiOiAiZmllbGRfdXVpZCIsICJmaWVsZF90eXBlIjogImFj
dGlvbmludm9jYXRpb24iLCAiY29udGVudCI6ICJiMzUyZDY1ZS1kOGY5LTQ1Y2EtODFhZS01ZmJh
OWMyZGNjMGYiLCAic2hvd19saW5rX2hlYWRlciI6IGZhbHNlfV0sICJ0aW1lb3V0X3NlY29uZHMi
OiA4NjQwMCwgInV1aWQiOiAiZjg2ZjlkZDctZDc3Yi00YmUwLTgzNmQtYmI4OTg1Njc2NWU2Iiwg
ImV4cG9ydF9rZXkiOiAiRXhhbXBsZTogQ1ZFIEJyb3dzZSIsICJsb2dpY190eXBlIjogImFsbCJ9
LCB7ImlkIjogNzIsICJuYW1lIjogIkV4YW1wbGU6IENWRSBTZWFyY2giLCAidHlwZSI6IDEsICJv
YmplY3RfdHlwZSI6ICJpbmNpZGVudCIsICJjb25kaXRpb25zIjogW10sICJhdXRvbWF0aW9ucyI6
IFtdLCAibWVzc2FnZV9kZXN0aW5hdGlvbnMiOiBbXSwgIndvcmtmbG93cyI6IFsiZXhhbXBsZV9j
dmVfc2VhcmNoIl0sICJ2aWV3X2l0ZW1zIjogW3sic3RlcF9sYWJlbCI6IG51bGwsICJzaG93X2lm
IjogbnVsbCwgImVsZW1lbnQiOiAiZmllbGRfdXVpZCIsICJmaWVsZF90eXBlIjogImFjdGlvbmlu
dm9jYXRpb24iLCAiY29udGVudCI6ICI5ZWU3YzgzYy0wZjIxLTQ2MWQtODE0ZC05ZGY0YjljMGMx
NTYiLCAic2hvd19saW5rX2hlYWRlciI6IGZhbHNlfSwgeyJzdGVwX2xhYmVsIjogbnVsbCwgInNo
b3dfaWYiOiBudWxsLCAiZWxlbWVudCI6ICJoZWFkZXIiLCAiZmllbGRfdHlwZSI6IG51bGwsICJj
b250ZW50IjogIk9yIiwgInNob3dfbGlua19oZWFkZXIiOiBmYWxzZX0sIHsic3RlcF9sYWJlbCI6
IG51bGwsICJzaG93X2lmIjogbnVsbCwgImVsZW1lbnQiOiAiZmllbGRfdXVpZCIsICJmaWVsZF90
eXBlIjogImFjdGlvbmludm9jYXRpb24iLCAiY29udGVudCI6ICJiMzUyZDY1ZS1kOGY5LTQ1Y2Et
ODFhZS01ZmJhOWMyZGNjMGYiLCAic2hvd19saW5rX2hlYWRlciI6IGZhbHNlfSwgeyJzdGVwX2xh
YmVsIjogbnVsbCwgInNob3dfaWYiOiBudWxsLCAiZWxlbWVudCI6ICJmaWVsZF91dWlkIiwgImZp
ZWxkX3R5cGUiOiAiYWN0aW9uaW52b2NhdGlvbiIsICJjb250ZW50IjogImY1YzA2YmRmLWE0MWUt
NDFhOS1iOTJhLTBhYzA1NzI2NjQzMyIsICJzaG93X2xpbmtfaGVhZGVyIjogZmFsc2V9LCB7InN0
ZXBfbGFiZWwiOiBudWxsLCAic2hvd19pZiI6IG51bGwsICJlbGVtZW50IjogImZpZWxkX3V1aWQi
LCAiZmllbGRfdHlwZSI6ICJhY3Rpb25pbnZvY2F0aW9uIiwgImNvbnRlbnQiOiAiNWZhZTllNzkt
MTExOC00MTk1LWJlZGItZWNjNDBjNDJhNjc5IiwgInNob3dfbGlua19oZWFkZXIiOiBmYWxzZX0s
IHsic3RlcF9sYWJlbCI6IG51bGwsICJzaG93X2lmIjogbnVsbCwgImVsZW1lbnQiOiAiZmllbGRf
dXVpZCIsICJmaWVsZF90eXBlIjogImFjdGlvbmludm9jYXRpb24iLCAiY29udGVudCI6ICJhMTRm
ZTM2OC1jNjQxLTQ3ZjMtOTAxYi0yZWQ5MjgyMTMzYmIiLCAic2hvd19saW5rX2hlYWRlciI6IGZh
bHNlfSwgeyJzdGVwX2xhYmVsIjogbnVsbCwgInNob3dfaWYiOiBudWxsLCAiZWxlbWVudCI6ICJo
ZWFkZXIiLCAiZmllbGRfdHlwZSI6IG51bGwsICJjb250ZW50IjogIk9yIG5vIHNlYXJjaCBjcml0
ZXJpYSBmb3IgbGFzdCAzMCBDVkVzIiwgInNob3dfbGlua19oZWFkZXIiOiBmYWxzZX1dLCAidGlt
ZW91dF9zZWNvbmRzIjogODY0MDAsICJ1dWlkIjogImJhY2Y1NWE5LTQ5OTUtNGM4NS04MzRlLTVm
MTU0YzRjMGExNCIsICJleHBvcnRfa2V5IjogIkV4YW1wbGU6IENWRSBTZWFyY2giLCAibG9naWNf
dHlwZSI6ICJhbGwifV0sICJsYXlvdXRzIjogW10sICJub3RpZmljYXRpb25zIjogbnVsbCwgInRp
bWVmcmFtZXMiOiBudWxsLCAibG9jYWxlIjogbnVsbCwgImluZHVzdHJpZXMiOiBudWxsLCAicmVn
dWxhdG9ycyI6IG51bGwsICJnZW9zIjogbnVsbCwgInRhc2tfb3JkZXIiOiBbXSwgImFjdGlvbl9v
cmRlciI6IFtdLCAidHlwZXMiOiBbeyJpZCI6IG51bGwsICJ0eXBlX2lkIjogOCwgInR5cGVfbmFt
ZSI6ICJjdmVfZGF0YSIsICJmaWVsZHMiOiB7InN1bW1hcnkiOiB7ImlkIjogMjg4LCAibmFtZSI6
ICJzdW1tYXJ5IiwgInRleHQiOiAiU3VtbWFyeSIsICJwcmVmaXgiOiBudWxsLCAidHlwZV9pZCI6
IDEwMTIsICJ0b29sdGlwIjogIiIsICJwbGFjZWhvbGRlciI6ICIiLCAiaW5wdXRfdHlwZSI6ICJ0
ZXh0IiwgImhpZGVfbm90aWZpY2F0aW9uIjogZmFsc2UsICJjaG9zZW4iOiB0cnVlLCAiZGVmYXVs
dF9jaG9zZW5fYnlfc2VydmVyIjogZmFsc2UsICJibGFua19vcHRpb24iOiB0cnVlLCAiaW50ZXJu
YWwiOiBmYWxzZSwgInV1aWQiOiAiNDRjZWI1MGEtYjIyOC00ZTdiLWE5YjYtMzBhMTFjZjlmY2Nm
IiwgIm9wZXJhdGlvbnMiOiBbXSwgIm9wZXJhdGlvbl9wZXJtcyI6IHt9LCAidmFsdWVzIjogW10s
ICJyZWFkX29ubHkiOiBmYWxzZSwgImNoYW5nZWFibGUiOiB0cnVlLCAicmljaF90ZXh0IjogZmFs
c2UsICJleHBvcnRfa2V5IjogImN2ZV9kYXRhL3N1bW1hcnkiLCAib3JkZXIiOiAyLCAid2lkdGgi
OiAyMTAsICJ0ZW1wbGF0ZXMiOiBbXSwgImRlcHJlY2F0ZWQiOiBmYWxzZX0sICJyZWZlcmVuY2Vz
IjogeyJpZCI6IDI5MSwgIm5hbWUiOiAicmVmZXJlbmNlcyIsICJ0ZXh0IjogIlJlZmVyZW5jZXMi
LCAicHJlZml4IjogbnVsbCwgInR5cGVfaWQiOiAxMDEyLCAidG9vbHRpcCI6ICIiLCAicGxhY2Vo
b2xkZXIiOiAiIiwgImlucHV0X3R5cGUiOiAidGV4dGFyZWEiLCAiaGlkZV9ub3RpZmljYXRpb24i
OiBmYWxzZSwgImNob3NlbiI6IHRydWUsICJkZWZhdWx0X2Nob3Nlbl9ieV9zZXJ2ZXIiOiBmYWxz
ZSwgImJsYW5rX29wdGlvbiI6IHRydWUsICJpbnRlcm5hbCI6IGZhbHNlLCAidXVpZCI6ICIzOGQx
MWJkNC1iNjJlLTQ0MTktOTc4ZS1jNzdmODE2MjIzZTIiLCAib3BlcmF0aW9ucyI6IFtdLCAib3Bl
cmF0aW9uX3Blcm1zIjoge30sICJ2YWx1ZXMiOiBbXSwgInJlYWRfb25seSI6IGZhbHNlLCAiY2hh
bmdlYWJsZSI6IHRydWUsICJyaWNoX3RleHQiOiB0cnVlLCAiZXhwb3J0X2tleSI6ICJjdmVfZGF0
YS9yZWZlcmVuY2VzIiwgIm9yZGVyIjogMywgIndpZHRoIjogMjYyLCAidGVtcGxhdGVzIjogW10s
ICJkZXByZWNhdGVkIjogZmFsc2V9LCAiY3ZlX2lkIjogeyJpZCI6IDI4NiwgIm5hbWUiOiAiY3Zl
X2lkIiwgInRleHQiOiAiQ1ZFIElEIiwgInByZWZpeCI6IG51bGwsICJ0eXBlX2lkIjogMTAxMiwg
InRvb2x0aXAiOiAiIiwgInBsYWNlaG9sZGVyIjogIiIsICJpbnB1dF90eXBlIjogInRleHQiLCAi
cmVxdWlyZWQiOiAiYWx3YXlzIiwgImhpZGVfbm90aWZpY2F0aW9uIjogZmFsc2UsICJjaG9zZW4i
OiB0cnVlLCAiZGVmYXVsdF9jaG9zZW5fYnlfc2VydmVyIjogZmFsc2UsICJibGFua19vcHRpb24i
OiB0cnVlLCAiaW50ZXJuYWwiOiBmYWxzZSwgInV1aWQiOiAiNjczOGI1MjMtZTc5MC00MjU5LWJi
YWYtNzEzMmU5YmU1MmRjIiwgIm9wZXJhdGlvbnMiOiBbXSwgIm9wZXJhdGlvbl9wZXJtcyI6IHt9
LCAidmFsdWVzIjogW10sICJyZWFkX29ubHkiOiBmYWxzZSwgImNoYW5nZWFibGUiOiB0cnVlLCAi
cmljaF90ZXh0IjogZmFsc2UsICJleHBvcnRfa2V5IjogImN2ZV9kYXRhL2N2ZV9pZCIsICJvcmRl
ciI6IDAsICJ3aWR0aCI6IDk3LCAidGVtcGxhdGVzIjogW10sICJkZXByZWNhdGVkIjogZmFsc2V9
LCAicHVibGlzaGVkX2RhdGUiOiB7ImlkIjogMjkwLCAibmFtZSI6ICJwdWJsaXNoZWRfZGF0ZSIs
ICJ0ZXh0IjogIlB1Ymxpc2hlZCBEYXRlIiwgInByZWZpeCI6IG51bGwsICJ0eXBlX2lkIjogMTAx
MiwgInRvb2x0aXAiOiAiIiwgInBsYWNlaG9sZGVyIjogIiIsICJpbnB1dF90eXBlIjogImRhdGVw
aWNrZXIiLCAicmVxdWlyZWQiOiAiYWx3YXlzIiwgImhpZGVfbm90aWZpY2F0aW9uIjogZmFsc2Us
ICJjaG9zZW4iOiB0cnVlLCAiZGVmYXVsdF9jaG9zZW5fYnlfc2VydmVyIjogZmFsc2UsICJibGFu
a19vcHRpb24iOiB0cnVlLCAiaW50ZXJuYWwiOiBmYWxzZSwgInV1aWQiOiAiMmMyOTc2ZDMtOGEy
OC00NDVhLWE5NzUtYzQyMGUyODA0ZmE0IiwgIm9wZXJhdGlvbnMiOiBbXSwgIm9wZXJhdGlvbl9w
ZXJtcyI6IHt9LCAidmFsdWVzIjogW10sICJyZWFkX29ubHkiOiBmYWxzZSwgImNoYW5nZWFibGUi
OiB0cnVlLCAicmljaF90ZXh0IjogZmFsc2UsICJleHBvcnRfa2V5IjogImN2ZV9kYXRhL3B1Ymxp
c2hlZF9kYXRlIiwgIm9yZGVyIjogMSwgIndpZHRoIjogNTgsICJ0ZW1wbGF0ZXMiOiBbXSwgImRl
cHJlY2F0ZWQiOiBmYWxzZX0sICJ2dWxuZXJhYmxlX2NvbmZpZ3VyYXRpb25fY3BlXzJfMiI6IHsi
aWQiOiAyODksICJuYW1lIjogInZ1bG5lcmFibGVfY29uZmlndXJhdGlvbl9jcGVfMl8yIiwgInRl
eHQiOiAiVnVsbmVyYWJsZSBDb25maWcgQ3BlIDIgMiIsICJwcmVmaXgiOiBudWxsLCAidHlwZV9p
ZCI6IDEwMTIsICJ0b29sdGlwIjogIiIsICJwbGFjZWhvbGRlciI6ICIiLCAiaW5wdXRfdHlwZSI6
ICJ0ZXh0YXJlYSIsICJoaWRlX25vdGlmaWNhdGlvbiI6IGZhbHNlLCAiY2hvc2VuIjogZmFsc2Us
ICJkZWZhdWx0X2Nob3Nlbl9ieV9zZXJ2ZXIiOiBmYWxzZSwgImJsYW5rX29wdGlvbiI6IGZhbHNl
LCAiaW50ZXJuYWwiOiBmYWxzZSwgInV1aWQiOiAiYzNjZTgxZDItM2VjMS00ZWVhLTgwODMtNzgw
ZDc0NWYzMGQ2IiwgIm9wZXJhdGlvbnMiOiBbXSwgIm9wZXJhdGlvbl9wZXJtcyI6IHt9LCAidmFs
dWVzIjogW10sICJyZWFkX29ubHkiOiBmYWxzZSwgImNoYW5nZWFibGUiOiB0cnVlLCAicmljaF90
ZXh0IjogdHJ1ZSwgImV4cG9ydF9rZXkiOiAiY3ZlX2RhdGEvdnVsbmVyYWJsZV9jb25maWd1cmF0
aW9uX2NwZV8yXzIiLCAib3JkZXIiOiA1LCAid2lkdGgiOiAxNjIsICJ0ZW1wbGF0ZXMiOiBbXSwg
ImRlcHJlY2F0ZWQiOiBmYWxzZX0sICJ2dWxuZXJhYmlsaXR5X2NvbmZpZ3VyYXRpb24iOiB7Imlk
IjogMjg3LCAibmFtZSI6ICJ2dWxuZXJhYmlsaXR5X2NvbmZpZ3VyYXRpb24iLCAidGV4dCI6ICJW
dWxuZXJhYmlsaXR5IENvbmZpZyIsICJwcmVmaXgiOiBudWxsLCAidHlwZV9pZCI6IDEwMTIsICJ0
b29sdGlwIjogIiIsICJwbGFjZWhvbGRlciI6ICIiLCAiaW5wdXRfdHlwZSI6ICJ0ZXh0YXJlYSIs
ICJoaWRlX25vdGlmaWNhdGlvbiI6IGZhbHNlLCAiY2hvc2VuIjogZmFsc2UsICJkZWZhdWx0X2No
b3Nlbl9ieV9zZXJ2ZXIiOiBmYWxzZSwgImJsYW5rX29wdGlvbiI6IGZhbHNlLCAiaW50ZXJuYWwi
OiBmYWxzZSwgInV1aWQiOiAiNTQ1NDEwZmEtZWYxMy00ZjI2LTk3MGEtNTIyM2U4ZjViNDY3Iiwg
Im9wZXJhdGlvbnMiOiBbXSwgIm9wZXJhdGlvbl9wZXJtcyI6IHt9LCAidmFsdWVzIjogW10sICJy
ZWFkX29ubHkiOiBmYWxzZSwgImNoYW5nZWFibGUiOiB0cnVlLCAicmljaF90ZXh0IjogdHJ1ZSwg
ImV4cG9ydF9rZXkiOiAiY3ZlX2RhdGEvdnVsbmVyYWJpbGl0eV9jb25maWd1cmF0aW9uIiwgIm9y
ZGVyIjogNCwgIndpZHRoIjogMTY5LCAidGVtcGxhdGVzIjogW10sICJkZXByZWNhdGVkIjogZmFs
c2V9fSwgInByb3BlcnRpZXMiOiB7ImNhbl9jcmVhdGUiOiBmYWxzZSwgImNhbl9kZXN0cm95Ijog
ZmFsc2UsICJmb3Jfd2hvIjogW119LCAicGFyZW50X3R5cGVzIjogWyJpbmNpZGVudCJdLCAiZGlz
cGxheV9uYW1lIjogIkNWRSBTZWFyY2hlZCBEYXRhIiwgImZvcl9ub3RpZmljYXRpb25zIjogZmFs
c2UsICJmb3JfYWN0aW9ucyI6IGZhbHNlLCAiZm9yX2N1c3RvbV9maWVsZHMiOiBmYWxzZSwgImV4
cG9ydF9rZXkiOiAiY3ZlX2RhdGEiLCAidXVpZCI6ICI2Y2VmYzgyYy1iZWE5LTQxMDItOGFmYS0z
NWVmOTMwM2IyMDUiLCAiYWN0aW9ucyI6IFtdLCAic2NyaXB0cyI6IFtdfV0sICJzY3JpcHRzIjog
W10sICJpbmNpZGVudF9hcnRpZmFjdF90eXBlcyI6IFtdLCAid29ya2Zsb3dzIjogW3sid29ya2Zs
b3dfaWQiOiA1MywgIm5hbWUiOiAiRXhhbXBsZTogQ1ZFIFNlYXJjaCIsICJwcm9ncmFtbWF0aWNf
bmFtZSI6ICJleGFtcGxlX2N2ZV9zZWFyY2giLCAib2JqZWN0X3R5cGUiOiAiaW5jaWRlbnQiLCAi
ZGVzY3JpcHRpb24iOiAiU2VhcmNoIGZvciBDb21tb24gVnVsbmVyYWJpbGl0eSBhbmQgRXhwb3N1
cmVzIChDVkUpIGJhc2VkIG9uIENWRS1JRCBvciB2ZW5kb3IgYW5kIHByb2R1Y3Rpb24gaW5mb3Jt
YXRpb24iLCAiY3JlYXRvcl9pZCI6ICJhQGV4YW1wbGUuY29tIiwgImxhc3RfbW9kaWZpZWRfYnki
OiAiYUBleGFtcGxlLmNvbSIsICJsYXN0X21vZGlmaWVkX3RpbWUiOiAxNTUzNjI0NzIyNDUxLCAi
ZXhwb3J0X2tleSI6ICJleGFtcGxlX2N2ZV9zZWFyY2giLCAidXVpZCI6ICI3YTQyMDdjYi1iODFh
LTQ1NjgtYmJjMC05Yzk5OWUwMTQwMjIiLCAiY29udGVudCI6IHsid29ya2Zsb3dfaWQiOiAiZXhh
bXBsZV9jdmVfc2VhcmNoIiwgInhtbCI6ICI8P3htbCB2ZXJzaW9uPVwiMS4wXCIgZW5jb2Rpbmc9
XCJVVEYtOFwiPz48ZGVmaW5pdGlvbnMgeG1sbnM9XCJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9C
UE1OLzIwMTAwNTI0L01PREVMXCIgeG1sbnM6YnBtbmRpPVwiaHR0cDovL3d3dy5vbWcub3JnL3Nw
ZWMvQlBNTi8yMDEwMDUyNC9ESVwiIHhtbG5zOm9tZ2RjPVwiaHR0cDovL3d3dy5vbWcub3JnL3Nw
ZWMvREQvMjAxMDA1MjQvRENcIiB4bWxuczpvbWdkaT1cImh0dHA6Ly93d3cub21nLm9yZy9zcGVj
L0RELzIwMTAwNTI0L0RJXCIgeG1sbnM6cmVzaWxpZW50PVwiaHR0cDovL3Jlc2lsaWVudC5pYm0u
Y29tL2JwbW5cIiB4bWxuczp4c2Q9XCJodHRwOi8vd3d3LnczLm9yZy8yMDAxL1hNTFNjaGVtYVwi
IHhtbG5zOnhzaT1cImh0dHA6Ly93d3cudzMub3JnLzIwMDEvWE1MU2NoZW1hLWluc3RhbmNlXCIg
dGFyZ2V0TmFtZXNwYWNlPVwiaHR0cDovL3d3dy5jYW11bmRhLm9yZy90ZXN0XCI+PHByb2Nlc3Mg
aWQ9XCJleGFtcGxlX2N2ZV9zZWFyY2hcIiBpc0V4ZWN1dGFibGU9XCJ0cnVlXCIgbmFtZT1cIkV4
YW1wbGU6IENWRSBTZWFyY2hcIj48ZG9jdW1lbnRhdGlvbj5TZWFyY2ggZm9yIENvbW1vbiBWdWxu
ZXJhYmlsaXR5IGFuZCBFeHBvc3VyZXMgKENWRSkgYmFzZWQgb24gQ1ZFLUlEIG9yIHZlbmRvciBh
bmQgcHJvZHVjdGlvbiBpbmZvcm1hdGlvbjwvZG9jdW1lbnRhdGlvbj48c3RhcnRFdmVudCBpZD1c
IlN0YXJ0RXZlbnRfMTU1YXN4bVwiPjxvdXRnb2luZz5TZXF1ZW5jZUZsb3dfMGx1NGJ1ODwvb3V0
Z29pbmc+PC9zdGFydEV2ZW50PjxzZXJ2aWNlVGFzayBpZD1cIlNlcnZpY2VUYXNrXzEzODQycHVc
IiBuYW1lPVwiQ1ZFIFNlYXJjaFwiIHJlc2lsaWVudDp0eXBlPVwiZnVuY3Rpb25cIj48ZXh0ZW5z
aW9uRWxlbWVudHM+PHJlc2lsaWVudDpmdW5jdGlvbiB1dWlkPVwiNDE3NmY2N2MtYjdhOC00OTU3
LWJlYjMtOGEyNzYzYjUzMTI3XCI+e1wiaW5wdXRzXCI6e30sXCJwb3N0X3Byb2Nlc3Npbmdfc2Ny
aXB0XCI6XCIjZ2xvYmFsc1xcbkVOVFJZX1RPX0RBVEFUQUJMRV9NQVAgPSB7XFxuICAgICBcXFwi
Y3ZlXFxcIjogXFxcImN2ZV9pZFxcXCIsXFxuICAgICBcXFwicHViZHRlXFxcIjogXFxcInB1Ymxp
c2hlZF9kYXRlXFxcIixcXG4gICAgIFxcXCJzdW1cXFwiOiBcXFwic3VtbWFyeVxcXCIsXFxuICAg
ICBcXFwicmVmXFxcIjogXFxcInJlZmVyZW5jZXNcXFwiLFxcbiAgICAgXFxcInZjXFxcIjogXFxc
InZ1bG5lcmFiaWxpdHlfY29uZmlndXJhdGlvblxcXCIsXFxuICAgICBcXFwidmMyXFxcIjogXFxc
InZ1bG5lcmFibGVfY29uZmlndXJhdGlvbl9jcGVfMl8yXFxcIlxcbn1cXG5cXG5hcGlfY2FsbF90
eXBlID0gcmVzdWx0c1snYXBpX2NhbGwnXVxcbm91dHB1dF9kYXRhID0gcmVzdWx0c1snY29udGVu
dCddXFxuYXBpX2NhbGxfdHlwZV90ZXh0ID0gXFxcIiZsdDtwJmd0OyZsdDtiJmd0O2FwaSBjYWxs
IHR5cGUgOiZsdDsvYiZndDsge30mbHQ7L3AmZ3Q7XFxcIlxcbmJyb3dzZV9yaWNoX3RleHQgPSBc
XFwiJmx0O3AmZ3Q7Jmx0O2ImZ3Q7e30mYW1wO2Vuc3A6JmFtcDtlbnNwJmx0Oy9iJmd0O3t9JmFt
cDtlbnNwJmFtcDtlbnNwJmx0Oy9wJmd0O1xcXCJcXG5yaWNoX3RleHRfdG1wID0gXFxcIlxcXCJc
XG5cXG4jQWRkaW5nIGRhdGEgdG8gdGFibGVcXG5yZWZfbGlua190ZXh0ID0gXFxcIlxcXCJcXG5p
ZiBvdXRwdXRfZGF0YTpcXG4gICAgIGZvciBkaWN0X2VsZW1lbnQgaW4gb3V0cHV0X2RhdGE6XFxu
ICAgICAgICAgIHJpY2hfdGV4dF90bXAgPSBcXFwiXFxcIlxcbiAgICAgICAgICB0YWJsZV9yb3df
b2JqZWN0ID0gaW5jaWRlbnQuYWRkUm93KFxcXCJjdmVfZGF0YVxcXCIpXFxuICAgICAgICAgIGZv
ciBrZXlfZGF0YSx2YWx1ZV9kYXRhIGluIGRpY3RfZWxlbWVudC5pdGVtcygpOlxcbiAgICAgICAg
ICAgICAgIGlmIGtleV9kYXRhID09ICdQdWJsaXNoZWQnOlxcbiAgICAgICAgICAgICAgICAgICAg
dGFibGVfcm93X29iamVjdFtFTlRSWV9UT19EQVRBVEFCTEVfTUFQW1xcXCJwdWJkdGVcXFwiXV0g
PSBpbnQodmFsdWVfZGF0YSlcXG4gICAgICAgICAgICAgICBlbGlmIGtleV9kYXRhID09ICdpZCc6
XFxuICAgICAgICAgICAgICAgICAgICB0YWJsZV9yb3dfb2JqZWN0W0VOVFJZX1RPX0RBVEFUQUJM
RV9NQVBbXFxcImN2ZVxcXCJdXSA9IHZhbHVlX2RhdGFcXG4gICAgICAgICAgICAgICBlbGlmIGtl
eV9kYXRhID09ICdzdW1tYXJ5JzpcXG4gICAgICAgICAgICAgICAgICAgIHRhYmxlX3Jvd19vYmpl
Y3RbRU5UUllfVE9fREFUQVRBQkxFX01BUFtcXFwic3VtXFxcIl1dID0gdmFsdWVfZGF0YVxcbiAg
ICAgICAgICAgICAgIGVsaWYga2V5X2RhdGEgPT0gJ3JlZmVyZW5jZXMnOlxcbiAgICAgICAgICAg
ICAgICAgICAgZm9yIGxpbmtfdXJsIGluIHZhbHVlX2RhdGE6XFxuICAgICAgICAgICAgICAgICAg
ICAgICAgIHJlZl9saW5rX3RleHQgKz0gJyZsdDtwJmd0OyZsdDthIGhyZWY9XFxcInswfVxcXCIm
Z3Q7ezB9Jmx0Oy9hJmd0OyZsdDsvcCZndDsnLmZvcm1hdChsaW5rX3VybClcXG4gICAgICAgICAg
ICAgICAgICAgIHRhYmxlX3Jvd19vYmplY3RbRU5UUllfVE9fREFUQVRBQkxFX01BUFtcXFwicmVm
XFxcIl1dID0gcmVmX2xpbmtfdGV4dFxcbiAgICAgICAgICAgICAgIGVsaWYga2V5X2RhdGEgPT0g
J3Z1bG5lcmFibGVfY29uZmlndXJhdGlvbic6XFxuICAgICAgICAgICAgICAgICAgICBpZiB2YWx1
ZV9kYXRhOlxcbiAgICAgICAgICAgICAgICAgICAgICAgICBmb3IgdmNfY29sbGVjdGlvbiBpbiB2
YWx1ZV9kYXRhOlxcbiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGlmIGlzaW5zdGFuY2Uo
dmNfY29sbGVjdGlvbixkaWN0KTpcXG4gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IGZvciBrZXlfZGF0YSx2YWx1ZV9kYXRhIGluIHZjX2NvbGxlY3Rpb24uaXRlbXMoKTpcXG4gICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgdGV4dCA9IGJyb3dzZV9yaWNoX3Rl
eHQuZm9ybWF0KGtleV9kYXRhLHZhbHVlX2RhdGEpICAgICBcXG4gICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgcmljaF90ZXh0X3RtcCArPSB0ZXh0XFxuICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgZWxzZTpcXG4gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIHJpY2hfdGV4dF90bXAgKz0gXFxcIiZsdDtwJmd0O3t9Jmx0Oy9wJmd0O1xcXCIuZm9ybWF0
KHZjX2NvbGxlY3Rpb24pXFxuICAgICAgICAgICAgICAgICAgICBlbHNlOlxcbiAgICAgICAgICAg
ICAgICAgICAgICAgICByaWNoX3RleHRfdG1wID0gXFxcIk5vIERhdGFcXFwiXFxuICAgICAgICAg
ICAgICAgICAgICB0YWJsZV9yb3dfb2JqZWN0W0VOVFJZX1RPX0RBVEFUQUJMRV9NQVBbXFxcInZj
XFxcIl1dID0gcmljaF90ZXh0X3RtcFxcbiAgICAgICAgICAgICAgIGVsaWYga2V5X2RhdGEgPT0g
J3Z1bG5lcmFibGVfY29uZmlndXJhdGlvbl9jcGVfMl8yJzpcXG4gICAgICAgICAgICAgICAgICAg
IHJpY2hfdGV4dF90bXBfMiA9ICcnXFxuICAgICAgICAgICAgICAgICAgICBpZiB2YWx1ZV9kYXRh
OlxcbiAgICAgICAgICAgICAgICAgICAgICAgICBmb3IgdmNfY29sbGVjdGlvbiBpbiB2YWx1ZV9k
YXRhOlxcbiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHJpY2hfdGV4dF90bXBfMiArPSBc
XFwiJmx0O3AmZ3Q7e30mbHQ7L3AmZ3Q7XFxcIi5mb3JtYXQodmNfY29sbGVjdGlvbilcXG4gICAg
ICAgICAgICAgICAgICAgIGVsc2U6XFxuICAgICAgICAgICAgICAgICAgICAgICAgIHJpY2hfdGV4
dF90bXBfMiA9IFxcXCJObyBEYXRhXFxcIlxcbiAgICAgICAgICAgICAgICAgICAgdGFibGVfcm93
X29iamVjdFtFTlRSWV9UT19EQVRBVEFCTEVfTUFQW1xcXCJ2YzJcXFwiXV0gPSByaWNoX3RleHRf
dG1wXzJcXG5lbHNlOlxcbiAgICAgaW5jaWRlbnQuYWRkTm90ZSh1XFxcIk5vIGRhdGEgcmV0dXJu
ZWQgZnJvbSBDVkUgU2VhcmNoXFxcXG5cXFxcbkNWRS1JRDoge31cXFxcblZlbmRvcjoge31cXFxc
blByb2R1Y3Q6IHt9XFxcIi5mb3JtYXQocnVsZS5wcm9wZXJ0aWVzLmN2ZV9pZCwgcnVsZS5wcm9w
ZXJ0aWVzLmN2ZV92ZW5kb3IsIHJ1bGUucHJvcGVydGllcy5jdmVfcHJvZHVjdCkpXFxuXCIsXCJw
cmVfcHJvY2Vzc2luZ19zY3JpcHRcIjpcImlucHV0cy5jdmVfaWQgPSBydWxlLnByb3BlcnRpZXMu
Y3ZlX2lkXFxuaW5wdXRzLmN2ZV92ZW5kb3IgPSBydWxlLnByb3BlcnRpZXMuY3ZlX3ZlbmRvclxc
bmlucHV0cy5jdmVfcHJvZHVjdCA9IHJ1bGUucHJvcGVydGllcy5jdmVfcHJvZHVjdFxcbmlucHV0
cy5jdmVfcHVibGlzaGVkX2RhdGVfZnJvbSA9IHJ1bGUucHJvcGVydGllcy5jdmVfcHVibGlzaGVk
X2RhdGVfZnJvbVxcbmlucHV0cy5jdmVfcHVibGlzaGVkX2RhdGVfdG8gPSBydWxlLnByb3BlcnRp
ZXMuY3ZlX3B1Ymxpc2hlZF9kYXRlX3RvXFxuXCJ9PC9yZXNpbGllbnQ6ZnVuY3Rpb24+PC9leHRl
bnNpb25FbGVtZW50cz48aW5jb21pbmc+U2VxdWVuY2VGbG93XzBsdTRidTg8L2luY29taW5nPjxv
dXRnb2luZz5TZXF1ZW5jZUZsb3dfMGE1MWIxYzwvb3V0Z29pbmc+PC9zZXJ2aWNlVGFzaz48c2Vx
dWVuY2VGbG93IGlkPVwiU2VxdWVuY2VGbG93XzBsdTRidThcIiBzb3VyY2VSZWY9XCJTdGFydEV2
ZW50XzE1NWFzeG1cIiB0YXJnZXRSZWY9XCJTZXJ2aWNlVGFza18xMzg0MnB1XCIvPjxlbmRFdmVu
dCBpZD1cIkVuZEV2ZW50XzExam1jMXdcIj48aW5jb21pbmc+U2VxdWVuY2VGbG93XzBhNTFiMWM8
L2luY29taW5nPjwvZW5kRXZlbnQ+PHNlcXVlbmNlRmxvdyBpZD1cIlNlcXVlbmNlRmxvd18wYTUx
YjFjXCIgc291cmNlUmVmPVwiU2VydmljZVRhc2tfMTM4NDJwdVwiIHRhcmdldFJlZj1cIkVuZEV2
ZW50XzExam1jMXdcIi8+PHRleHRBbm5vdGF0aW9uIGlkPVwiVGV4dEFubm90YXRpb25fMWt4eGl5
dFwiPjx0ZXh0PlN0YXJ0IHlvdXIgd29ya2Zsb3cgaGVyZTwvdGV4dD48L3RleHRBbm5vdGF0aW9u
Pjxhc3NvY2lhdGlvbiBpZD1cIkFzc29jaWF0aW9uXzFzZXVqNDhcIiBzb3VyY2VSZWY9XCJTdGFy
dEV2ZW50XzE1NWFzeG1cIiB0YXJnZXRSZWY9XCJUZXh0QW5ub3RhdGlvbl8xa3h4aXl0XCIvPjx0
ZXh0QW5ub3RhdGlvbiBpZD1cIlRleHRBbm5vdGF0aW9uXzFhMXhxNTBcIj48dGV4dD5SZXN1bHRz
IHJldHVybmVkIGluIHRoZSBDVkUgZGF0YXRhYmxlPC90ZXh0PjwvdGV4dEFubm90YXRpb24+PGFz
c29jaWF0aW9uIGlkPVwiQXNzb2NpYXRpb25fMGZ4d2h3Z1wiIHNvdXJjZVJlZj1cIlNlcnZpY2VU
YXNrXzEzODQycHVcIiB0YXJnZXRSZWY9XCJUZXh0QW5ub3RhdGlvbl8xYTF4cTUwXCIvPjwvcHJv
Y2Vzcz48YnBtbmRpOkJQTU5EaWFncmFtIGlkPVwiQlBNTkRpYWdyYW1fMVwiPjxicG1uZGk6QlBN
TlBsYW5lIGJwbW5FbGVtZW50PVwidW5kZWZpbmVkXCIgaWQ9XCJCUE1OUGxhbmVfMVwiPjxicG1u
ZGk6QlBNTlNoYXBlIGJwbW5FbGVtZW50PVwiU3RhcnRFdmVudF8xNTVhc3htXCIgaWQ9XCJTdGFy
dEV2ZW50XzE1NWFzeG1fZGlcIj48b21nZGM6Qm91bmRzIGhlaWdodD1cIjM2XCIgd2lkdGg9XCIz
NlwiIHg9XCIxNjJcIiB5PVwiMTg4XCIvPjxicG1uZGk6QlBNTkxhYmVsPjxvbWdkYzpCb3VuZHMg
aGVpZ2h0PVwiMFwiIHdpZHRoPVwiOTBcIiB4PVwiMTU3XCIgeT1cIjIyM1wiLz48L2JwbW5kaTpC
UE1OTGFiZWw+PC9icG1uZGk6QlBNTlNoYXBlPjxicG1uZGk6QlBNTlNoYXBlIGJwbW5FbGVtZW50
PVwiVGV4dEFubm90YXRpb25fMWt4eGl5dFwiIGlkPVwiVGV4dEFubm90YXRpb25fMWt4eGl5dF9k
aVwiPjxvbWdkYzpCb3VuZHMgaGVpZ2h0PVwiMzBcIiB3aWR0aD1cIjEwMFwiIHg9XCI5OVwiIHk9
XCIyNTRcIi8+PC9icG1uZGk6QlBNTlNoYXBlPjxicG1uZGk6QlBNTkVkZ2UgYnBtbkVsZW1lbnQ9
XCJBc3NvY2lhdGlvbl8xc2V1ajQ4XCIgaWQ9XCJBc3NvY2lhdGlvbl8xc2V1ajQ4X2RpXCI+PG9t
Z2RpOndheXBvaW50IHg9XCIxNjlcIiB4c2k6dHlwZT1cIm9tZ2RjOlBvaW50XCIgeT1cIjIyMFwi
Lz48b21nZGk6d2F5cG9pbnQgeD1cIjE1M1wiIHhzaTp0eXBlPVwib21nZGM6UG9pbnRcIiB5PVwi
MjU0XCIvPjwvYnBtbmRpOkJQTU5FZGdlPjxicG1uZGk6QlBNTlNoYXBlIGJwbW5FbGVtZW50PVwi
U2VydmljZVRhc2tfMTM4NDJwdVwiIGlkPVwiU2VydmljZVRhc2tfMTM4NDJwdV9kaVwiPjxvbWdk
YzpCb3VuZHMgaGVpZ2h0PVwiODBcIiB3aWR0aD1cIjEwMFwiIHg9XCIyODdcIiB5PVwiMTY2XCIv
PjwvYnBtbmRpOkJQTU5TaGFwZT48YnBtbmRpOkJQTU5FZGdlIGJwbW5FbGVtZW50PVwiU2VxdWVu
Y2VGbG93XzBsdTRidThcIiBpZD1cIlNlcXVlbmNlRmxvd18wbHU0YnU4X2RpXCI+PG9tZ2RpOndh
eXBvaW50IHg9XCIxOThcIiB4c2k6dHlwZT1cIm9tZ2RjOlBvaW50XCIgeT1cIjIwNlwiLz48b21n
ZGk6d2F5cG9pbnQgeD1cIjI4N1wiIHhzaTp0eXBlPVwib21nZGM6UG9pbnRcIiB5PVwiMjA2XCIv
PjxicG1uZGk6QlBNTkxhYmVsPjxvbWdkYzpCb3VuZHMgaGVpZ2h0PVwiMTNcIiB3aWR0aD1cIjBc
IiB4PVwiMjQyLjVcIiB5PVwiMTg0XCIvPjwvYnBtbmRpOkJQTU5MYWJlbD48L2JwbW5kaTpCUE1O
RWRnZT48YnBtbmRpOkJQTU5TaGFwZSBicG1uRWxlbWVudD1cIkVuZEV2ZW50XzExam1jMXdcIiBp
ZD1cIkVuZEV2ZW50XzExam1jMXdfZGlcIj48b21nZGM6Qm91bmRzIGhlaWdodD1cIjM2XCIgd2lk
dGg9XCIzNlwiIHg9XCI0NjIuNTMxODAzOTYyNDYwOVwiIHk9XCIxODhcIi8+PGJwbW5kaTpCUE1O
TGFiZWw+PG9tZ2RjOkJvdW5kcyBoZWlnaHQ9XCIxM1wiIHdpZHRoPVwiMFwiIHg9XCI0ODAuNTMx
ODAzOTYyNDYwOVwiIHk9XCIyMjdcIi8+PC9icG1uZGk6QlBNTkxhYmVsPjwvYnBtbmRpOkJQTU5T
aGFwZT48YnBtbmRpOkJQTU5FZGdlIGJwbW5FbGVtZW50PVwiU2VxdWVuY2VGbG93XzBhNTFiMWNc
IiBpZD1cIlNlcXVlbmNlRmxvd18wYTUxYjFjX2RpXCI+PG9tZ2RpOndheXBvaW50IHg9XCIzODdc
IiB4c2k6dHlwZT1cIm9tZ2RjOlBvaW50XCIgeT1cIjIwNlwiLz48b21nZGk6d2F5cG9pbnQgeD1c
IjQ2M1wiIHhzaTp0eXBlPVwib21nZGM6UG9pbnRcIiB5PVwiMjA2XCIvPjxicG1uZGk6QlBNTkxh
YmVsPjxvbWdkYzpCb3VuZHMgaGVpZ2h0PVwiMTNcIiB3aWR0aD1cIjBcIiB4PVwiNDI1XCIgeT1c
IjE4NFwiLz48L2JwbW5kaTpCUE1OTGFiZWw+PC9icG1uZGk6QlBNTkVkZ2U+PGJwbW5kaTpCUE1O
U2hhcGUgYnBtbkVsZW1lbnQ9XCJUZXh0QW5ub3RhdGlvbl8xYTF4cTUwXCIgaWQ9XCJUZXh0QW5u
b3RhdGlvbl8xYTF4cTUwX2RpXCI+PG9tZ2RjOkJvdW5kcyBoZWlnaHQ9XCI1NVwiIHdpZHRoPVwi
MjI4XCIgeD1cIjM3MlwiIHk9XCI2M1wiLz48L2JwbW5kaTpCUE1OU2hhcGU+PGJwbW5kaTpCUE1O
RWRnZSBicG1uRWxlbWVudD1cIkFzc29jaWF0aW9uXzBmeHdod2dcIiBpZD1cIkFzc29jaWF0aW9u
XzBmeHdod2dfZGlcIj48b21nZGk6d2F5cG9pbnQgeD1cIjM4MlwiIHhzaTp0eXBlPVwib21nZGM6
UG9pbnRcIiB5PVwiMTcxXCIvPjxvbWdkaTp3YXlwb2ludCB4PVwiNDUxXCIgeHNpOnR5cGU9XCJv
bWdkYzpQb2ludFwiIHk9XCIxMThcIi8+PC9icG1uZGk6QlBNTkVkZ2U+PC9icG1uZGk6QlBNTlBs
YW5lPjwvYnBtbmRpOkJQTU5EaWFncmFtPjwvZGVmaW5pdGlvbnM+IiwgInZlcnNpb24iOiAyfSwg
ImFjdGlvbnMiOiBbXX0sIHsid29ya2Zsb3dfaWQiOiA1MCwgIm5hbWUiOiAiRXhhbXBsZTogQ1ZF
IEJyb3dzZSIsICJwcm9ncmFtbWF0aWNfbmFtZSI6ICJleGFtcGxlX2N2ZV9icm93c2UiLCAib2Jq
ZWN0X3R5cGUiOiAiaW5jaWRlbnQiLCAiZGVzY3JpcHRpb24iOiAiQnJvd3NlIENvbW1vbiBWdWxu
ZXJhYmlsaXR5IEV4cG9zdXJlcyAoQ1ZFKSBkYXRhYmFzZSBmb3IgdmVuZG9yIGFuZCBwcm9kdWN0
IGluZm9ybWF0aW9uLiIsICJjcmVhdG9yX2lkIjogImFAZXhhbXBsZS5jb20iLCAibGFzdF9tb2Rp
ZmllZF9ieSI6ICJhQGV4YW1wbGUuY29tIiwgImxhc3RfbW9kaWZpZWRfdGltZSI6IDE1NTM2MjE1
Njc3MDUsICJleHBvcnRfa2V5IjogImV4YW1wbGVfY3ZlX2Jyb3dzZSIsICJ1dWlkIjogImY5ODk1
ZGRjLTVmMWEtNDkyNi05NjYzLTA5NjU3NWMwMThmYiIsICJjb250ZW50IjogeyJ3b3JrZmxvd19p
ZCI6ICJleGFtcGxlX2N2ZV9icm93c2UiLCAieG1sIjogIjw/eG1sIHZlcnNpb249XCIxLjBcIiBl
bmNvZGluZz1cIlVURi04XCI/PjxkZWZpbml0aW9ucyB4bWxucz1cImh0dHA6Ly93d3cub21nLm9y
Zy9zcGVjL0JQTU4vMjAxMDA1MjQvTU9ERUxcIiB4bWxuczpicG1uZGk9XCJodHRwOi8vd3d3Lm9t
Zy5vcmcvc3BlYy9CUE1OLzIwMTAwNTI0L0RJXCIgeG1sbnM6b21nZGM9XCJodHRwOi8vd3d3Lm9t
Zy5vcmcvc3BlYy9ERC8yMDEwMDUyNC9EQ1wiIHhtbG5zOm9tZ2RpPVwiaHR0cDovL3d3dy5vbWcu
b3JnL3NwZWMvREQvMjAxMDA1MjQvRElcIiB4bWxuczpyZXNpbGllbnQ9XCJodHRwOi8vcmVzaWxp
ZW50LmlibS5jb20vYnBtblwiIHhtbG5zOnhzZD1cImh0dHA6Ly93d3cudzMub3JnLzIwMDEvWE1M
U2NoZW1hXCIgeG1sbnM6eHNpPVwiaHR0cDovL3d3dy53My5vcmcvMjAwMS9YTUxTY2hlbWEtaW5z
dGFuY2VcIiB0YXJnZXROYW1lc3BhY2U9XCJodHRwOi8vd3d3LmNhbXVuZGEub3JnL3Rlc3RcIj48
cHJvY2VzcyBpZD1cImV4YW1wbGVfY3ZlX2Jyb3dzZVwiIGlzRXhlY3V0YWJsZT1cInRydWVcIiBu
YW1lPVwiRXhhbXBsZTogQ1ZFIEJyb3dzZVwiPjxkb2N1bWVudGF0aW9uPkJyb3dzZSBDb21tb24g
VnVsbmVyYWJpbGl0eSBFeHBvc3VyZXMgKENWRSkgZGF0YWJhc2UgZm9yIHZlbmRvciBhbmQgcHJv
ZHVjdCBpbmZvcm1hdGlvbi48L2RvY3VtZW50YXRpb24+PHN0YXJ0RXZlbnQgaWQ9XCJTdGFydEV2
ZW50XzE1NWFzeG1cIj48b3V0Z29pbmc+U2VxdWVuY2VGbG93XzFxbXlmbmI8L291dGdvaW5nPjwv
c3RhcnRFdmVudD48c2VydmljZVRhc2sgaWQ9XCJTZXJ2aWNlVGFza18wcXp2eW9yXCIgbmFtZT1c
IkNWRSBCcm93c2VcIiByZXNpbGllbnQ6dHlwZT1cImZ1bmN0aW9uXCI+PGV4dGVuc2lvbkVsZW1l
bnRzPjxyZXNpbGllbnQ6ZnVuY3Rpb24gdXVpZD1cIjQ2ZmEwYTk5LWRjYmMtNDJiMi04ZWUxLWEx
ODg5MTc2N2JlOFwiPntcImlucHV0c1wiOnt9LFwicG9zdF9wcm9jZXNzaW5nX3NjcmlwdFwiOlwi
YXBpX2NhbGxfdHlwZSA9IHJlc3VsdHNbJ2FwaV9jYWxsJ11cXG5vdXRwdXRfZGF0YSA9IHJlc3Vs
dHNbJ2NvbnRlbnQnXVxcbmFwaV9jYWxsX3R5cGVfdGV4dCA9IFxcXCImbHQ7cCZndDsmbHQ7YiZn
dDthcGkgY2FsbCB0eXBlIDombHQ7L2ImZ3Q7IHt9Jmx0Oy9wJmd0OyZsdDtwJmd0OyZsdDtiJmd0
O3ZlbmRvciA6Jmx0Oy9iJmd0OyB7fSAmbHQ7L3AmZ3Q7XFxcIlxcbmJyb3dzZV9yaWNoX3RleHQg
PSBcXFwiJmx0O3AmZ3Q7Jmx0O2ImZ3Q7e30mYW1wO2Vuc3A6JmFtcDtlbnNwJmx0Oy9iJmd0O3t9
JmFtcDtlbnNwJmFtcDtlbnNwJmx0Oy9wJmd0O1xcXCJcXG5yaWNoX3RleHRfdG1wID0gXFxcIlxc
XCJcXG4jQWRkaW5nIEJyb3dzZSBkYXRhIGFuZCBEYXRhYmFzZSBpbmZvcm1hdGlvbiBOb3RlcyBT
ZWN0aW9uXFxuYXBpX2NhbGxfdHlwZV90ZXh0ID0gYXBpX2NhbGxfdHlwZV90ZXh0LmZvcm1hdChh
cGlfY2FsbF90eXBlLCBydWxlLnByb3BlcnRpZXMuY3ZlX3ZlbmRvcilcXG5icm93c2VfcmljaF90
ZXh0X2ZpbmFsID0gXFxcIlxcXCJcXG5cXG5pZiBvdXRwdXRfZGF0YTpcXG4gICAgIGZvciB4IGlu
IG91dHB1dF9kYXRhOlxcbiAgICAgICAgICBmb3Iga2V5X2RhdGEsdmFsdWVfZGF0YSBpbiB4Lml0
ZW1zKCk6XFxuICAgICAgICAgICAgICAgdGV4dCA9IGJyb3dzZV9yaWNoX3RleHQuZm9ybWF0KGtl
eV9kYXRhLHZhbHVlX2RhdGEpXFxuICAgICAgICAgICAgICAgYXBpX2NhbGxfdHlwZV90ZXh0ICs9
IHRleHRcXG4gICAgIGJyb3dzZV9yaWNoX3RleHRfZmluYWwgPSBoZWxwZXIuY3JlYXRlUmljaFRl
eHQoYXBpX2NhbGxfdHlwZV90ZXh0KVxcbmVsc2U6XFxuICAgICBicm93c2VfcmljaF90ZXh0X2Zp
bmFsID0gJ05vIHNlYXJjaGVkIGRhdGEgcmV0dXJuZWQnXFxuaW5jaWRlbnQuYWRkTm90ZShicm93
c2VfcmljaF90ZXh0X2ZpbmFsKVwiLFwicHJlX3Byb2Nlc3Npbmdfc2NyaXB0XCI6XCJpbnB1dHMu
Y3ZlX2Jyb3dzZV9jcml0ZXJpYSA9ICdicm93c2UnXFxuaW5wdXRzLmN2ZV92ZW5kb3IgPSBydWxl
LnByb3BlcnRpZXMuY3ZlX3ZlbmRvclwiLFwicmVzdWx0X25hbWVcIjpcIlwifTwvcmVzaWxpZW50
OmZ1bmN0aW9uPjwvZXh0ZW5zaW9uRWxlbWVudHM+PGluY29taW5nPlNlcXVlbmNlRmxvd18xcW15
Zm5iPC9pbmNvbWluZz48b3V0Z29pbmc+U2VxdWVuY2VGbG93XzBwM2w0Y3I8L291dGdvaW5nPjwv
c2VydmljZVRhc2s+PHNlcXVlbmNlRmxvdyBpZD1cIlNlcXVlbmNlRmxvd18xcW15Zm5iXCIgc291
cmNlUmVmPVwiU3RhcnRFdmVudF8xNTVhc3htXCIgdGFyZ2V0UmVmPVwiU2VydmljZVRhc2tfMHF6
dnlvclwiLz48ZW5kRXZlbnQgaWQ9XCJFbmRFdmVudF8xNXNyZ3VxXCI+PGluY29taW5nPlNlcXVl
bmNlRmxvd18wcDNsNGNyPC9pbmNvbWluZz48L2VuZEV2ZW50PjxzZXF1ZW5jZUZsb3cgaWQ9XCJT
ZXF1ZW5jZUZsb3dfMHAzbDRjclwiIHNvdXJjZVJlZj1cIlNlcnZpY2VUYXNrXzBxenZ5b3JcIiB0
YXJnZXRSZWY9XCJFbmRFdmVudF8xNXNyZ3VxXCIvPjx0ZXh0QW5ub3RhdGlvbiBpZD1cIlRleHRB
bm5vdGF0aW9uXzFreHhpeXRcIj48dGV4dD5TdGFydCB5b3VyIHdvcmtmbG93IGhlcmU8L3RleHQ+
PC90ZXh0QW5ub3RhdGlvbj48YXNzb2NpYXRpb24gaWQ9XCJBc3NvY2lhdGlvbl8xc2V1ajQ4XCIg
c291cmNlUmVmPVwiU3RhcnRFdmVudF8xNTVhc3htXCIgdGFyZ2V0UmVmPVwiVGV4dEFubm90YXRp
b25fMWt4eGl5dFwiLz48dGV4dEFubm90YXRpb24gaWQ9XCJUZXh0QW5ub3RhdGlvbl8xbWh1dGVy
XCI+PHRleHQ+UmV0dXJucyB2dWxuZXJhYmlsaXR5IGRhdGEgYXMgYW4gaW5jaWRlbnQgbm90ZTwv
dGV4dD48L3RleHRBbm5vdGF0aW9uPjxhc3NvY2lhdGlvbiBpZD1cIkFzc29jaWF0aW9uXzFkOG5z
NDlcIiBzb3VyY2VSZWY9XCJTZXJ2aWNlVGFza18wcXp2eW9yXCIgdGFyZ2V0UmVmPVwiVGV4dEFu
bm90YXRpb25fMW1odXRlclwiLz48dGV4dEFubm90YXRpb24gaWQ9XCJUZXh0QW5ub3RhdGlvbl8x
OTd3cmQ5XCI+PHRleHQ+PCFbQ0RBVEFbQnJvd3NlIGZvciBhbGwgdmVuZG9yICYgcHJvZHVjdCBp
bmZvcm1hdGlvbiBvciBmb3Igc3BlY2lmaWMgdmVuZG9yJ3MgcHJvZHVjdCBkZXRhaWxzXV0+PC90
ZXh0PjwvdGV4dEFubm90YXRpb24+PGFzc29jaWF0aW9uIGlkPVwiQXNzb2NpYXRpb25fMHp5MzBi
c1wiIHNvdXJjZVJlZj1cIlNlcnZpY2VUYXNrXzBxenZ5b3JcIiB0YXJnZXRSZWY9XCJUZXh0QW5u
b3RhdGlvbl8xOTd3cmQ5XCIvPjwvcHJvY2Vzcz48YnBtbmRpOkJQTU5EaWFncmFtIGlkPVwiQlBN
TkRpYWdyYW1fMVwiPjxicG1uZGk6QlBNTlBsYW5lIGJwbW5FbGVtZW50PVwidW5kZWZpbmVkXCIg
aWQ9XCJCUE1OUGxhbmVfMVwiPjxicG1uZGk6QlBNTlNoYXBlIGJwbW5FbGVtZW50PVwiU3RhcnRF
dmVudF8xNTVhc3htXCIgaWQ9XCJTdGFydEV2ZW50XzE1NWFzeG1fZGlcIj48b21nZGM6Qm91bmRz
IGhlaWdodD1cIjM2XCIgd2lkdGg9XCIzNlwiIHg9XCIxNzdcIiB5PVwiMjEwXCIvPjxicG1uZGk6
QlBNTkxhYmVsPjxvbWdkYzpCb3VuZHMgaGVpZ2h0PVwiMFwiIHdpZHRoPVwiOTBcIiB4PVwiMTcy
XCIgeT1cIjI0NVwiLz48L2JwbW5kaTpCUE1OTGFiZWw+PC9icG1uZGk6QlBNTlNoYXBlPjxicG1u
ZGk6QlBNTlNoYXBlIGJwbW5FbGVtZW50PVwiVGV4dEFubm90YXRpb25fMWt4eGl5dFwiIGlkPVwi
VGV4dEFubm90YXRpb25fMWt4eGl5dF9kaVwiPjxvbWdkYzpCb3VuZHMgaGVpZ2h0PVwiMzBcIiB3
aWR0aD1cIjEwMFwiIHg9XCI5OVwiIHk9XCIyNTRcIi8+PC9icG1uZGk6QlBNTlNoYXBlPjxicG1u
ZGk6QlBNTkVkZ2UgYnBtbkVsZW1lbnQ9XCJBc3NvY2lhdGlvbl8xc2V1ajQ4XCIgaWQ9XCJBc3Nv
Y2lhdGlvbl8xc2V1ajQ4X2RpXCI+PG9tZ2RpOndheXBvaW50IHg9XCIxODBcIiB4c2k6dHlwZT1c
Im9tZ2RjOlBvaW50XCIgeT1cIjIzOFwiLz48b21nZGk6d2F5cG9pbnQgeD1cIjE2MlwiIHhzaTp0
eXBlPVwib21nZGM6UG9pbnRcIiB5PVwiMjU0XCIvPjwvYnBtbmRpOkJQTU5FZGdlPjxicG1uZGk6
QlBNTlNoYXBlIGJwbW5FbGVtZW50PVwiU2VydmljZVRhc2tfMHF6dnlvclwiIGlkPVwiU2Vydmlj
ZVRhc2tfMHF6dnlvcl9kaVwiPjxvbWdkYzpCb3VuZHMgaGVpZ2h0PVwiODBcIiB3aWR0aD1cIjEw
MFwiIHg9XCIyNjRcIiB5PVwiMTg4XCIvPjwvYnBtbmRpOkJQTU5TaGFwZT48YnBtbmRpOkJQTU5F
ZGdlIGJwbW5FbGVtZW50PVwiU2VxdWVuY2VGbG93XzFxbXlmbmJcIiBpZD1cIlNlcXVlbmNlRmxv
d18xcW15Zm5iX2RpXCI+PG9tZ2RpOndheXBvaW50IHg9XCIyMTNcIiB4c2k6dHlwZT1cIm9tZ2Rj
OlBvaW50XCIgeT1cIjIyOFwiLz48b21nZGk6d2F5cG9pbnQgeD1cIjIzOVwiIHhzaTp0eXBlPVwi
b21nZGM6UG9pbnRcIiB5PVwiMjI4XCIvPjxvbWdkaTp3YXlwb2ludCB4PVwiMjM5XCIgeHNpOnR5
cGU9XCJvbWdkYzpQb2ludFwiIHk9XCIyMjhcIi8+PG9tZ2RpOndheXBvaW50IHg9XCIyNjRcIiB4
c2k6dHlwZT1cIm9tZ2RjOlBvaW50XCIgeT1cIjIyOFwiLz48YnBtbmRpOkJQTU5MYWJlbD48b21n
ZGM6Qm91bmRzIGhlaWdodD1cIjEyXCIgd2lkdGg9XCI5MFwiIHg9XCIyMDlcIiB5PVwiMjIyXCIv
PjwvYnBtbmRpOkJQTU5MYWJlbD48L2JwbW5kaTpCUE1ORWRnZT48YnBtbmRpOkJQTU5TaGFwZSBi
cG1uRWxlbWVudD1cIkVuZEV2ZW50XzE1c3JndXFcIiBpZD1cIkVuZEV2ZW50XzE1c3JndXFfZGlc
Ij48b21nZGM6Qm91bmRzIGhlaWdodD1cIjM2XCIgd2lkdGg9XCIzNlwiIHg9XCI0NTlcIiB5PVwi
MjEwXCIvPjxicG1uZGk6QlBNTkxhYmVsPjxvbWdkYzpCb3VuZHMgaGVpZ2h0PVwiMTJcIiB3aWR0
aD1cIjkwXCIgeD1cIjQzMlwiIHk9XCIyNTBcIi8+PC9icG1uZGk6QlBNTkxhYmVsPjwvYnBtbmRp
OkJQTU5TaGFwZT48YnBtbmRpOkJQTU5FZGdlIGJwbW5FbGVtZW50PVwiU2VxdWVuY2VGbG93XzBw
M2w0Y3JcIiBpZD1cIlNlcXVlbmNlRmxvd18wcDNsNGNyX2RpXCI+PG9tZ2RpOndheXBvaW50IHg9
XCIzNjRcIiB4c2k6dHlwZT1cIm9tZ2RjOlBvaW50XCIgeT1cIjIyOFwiLz48b21nZGk6d2F5cG9p
bnQgeD1cIjQxN1wiIHhzaTp0eXBlPVwib21nZGM6UG9pbnRcIiB5PVwiMjI4XCIvPjxvbWdkaTp3
YXlwb2ludCB4PVwiNDE3XCIgeHNpOnR5cGU9XCJvbWdkYzpQb2ludFwiIHk9XCIyMjhcIi8+PG9t
Z2RpOndheXBvaW50IHg9XCI0NThcIiB4c2k6dHlwZT1cIm9tZ2RjOlBvaW50XCIgeT1cIjIyOFwi
Lz48YnBtbmRpOkJQTU5MYWJlbD48b21nZGM6Qm91bmRzIGhlaWdodD1cIjEyXCIgd2lkdGg9XCI5
MFwiIHg9XCIzODdcIiB5PVwiMjIyXCIvPjwvYnBtbmRpOkJQTU5MYWJlbD48L2JwbW5kaTpCUE1O
RWRnZT48YnBtbmRpOkJQTU5TaGFwZSBicG1uRWxlbWVudD1cIlRleHRBbm5vdGF0aW9uXzFtaHV0
ZXJcIiBpZD1cIlRleHRBbm5vdGF0aW9uXzFtaHV0ZXJfZGlcIj48b21nZGM6Qm91bmRzIGhlaWdo
dD1cIjU4XCIgd2lkdGg9XCIyNjFcIiB4PVwiMzcxXCIgeT1cIjY1XCIvPjwvYnBtbmRpOkJQTU5T
aGFwZT48YnBtbmRpOkJQTU5FZGdlIGJwbW5FbGVtZW50PVwiQXNzb2NpYXRpb25fMWQ4bnM0OVwi
IGlkPVwiQXNzb2NpYXRpb25fMWQ4bnM0OV9kaVwiPjxvbWdkaTp3YXlwb2ludCB4PVwiMzYwXCIg
eHNpOnR5cGU9XCJvbWdkYzpQb2ludFwiIHk9XCIxOTRcIi8+PG9tZ2RpOndheXBvaW50IHg9XCI0
NTlcIiB4c2k6dHlwZT1cIm9tZ2RjOlBvaW50XCIgeT1cIjEyM1wiLz48L2JwbW5kaTpCUE1ORWRn
ZT48YnBtbmRpOkJQTU5TaGFwZSBicG1uRWxlbWVudD1cIlRleHRBbm5vdGF0aW9uXzE5N3dyZDlc
IiBpZD1cIlRleHRBbm5vdGF0aW9uXzE5N3dyZDlfZGlcIj48b21nZGM6Qm91bmRzIGhlaWdodD1c
IjQ4XCIgd2lkdGg9XCIxOTVcIiB4PVwiNTNcIiB5PVwiNjlcIi8+PC9icG1uZGk6QlBNTlNoYXBl
PjxicG1uZGk6QlBNTkVkZ2UgYnBtbkVsZW1lbnQ9XCJBc3NvY2lhdGlvbl8wenkzMGJzXCIgaWQ9
XCJBc3NvY2lhdGlvbl8wenkzMGJzX2RpXCI+PG9tZ2RpOndheXBvaW50IHg9XCIyNzBcIiB4c2k6
dHlwZT1cIm9tZ2RjOlBvaW50XCIgeT1cIjE5MlwiLz48b21nZGk6d2F5cG9pbnQgeD1cIjE3OVwi
IHhzaTp0eXBlPVwib21nZGM6UG9pbnRcIiB5PVwiMTE3XCIvPjwvYnBtbmRpOkJQTU5FZGdlPjwv
YnBtbmRpOkJQTU5QbGFuZT48L2JwbW5kaTpCUE1ORGlhZ3JhbT48L2RlZmluaXRpb25zPiIsICJ2
ZXJzaW9uIjogOH0sICJhY3Rpb25zIjogW119XSwgInJvbGVzIjogW10sICJ3b3Jrc3BhY2VzIjog
W10sICJmdW5jdGlvbnMiOiBbeyJpZCI6IDM4LCAibmFtZSI6ICJmdW5jdGlvbl9jdmVfYnJvd3Nl
IiwgImRpc3BsYXlfbmFtZSI6ICJDVkUgQnJvd3NlIiwgImRlc2NyaXB0aW9uIjogeyJmb3JtYXQi
OiAidGV4dCIsICJjb250ZW50IjogIkEgRnVuY3Rpb24gdG8gYnJvd3NlIENvbW1vbiBWdWxuZXJh
YmlsaXR5IGFuZCBFeHBvc3VyZXMgKENWRSkgdmVuZG9ycyBhbmQgcHJvZHVjdCBpbmZvcm1hdGlv
biBmcm9tIGh0dHBzOi8vY3ZlLmNpcmNsLmx1LiJ9LCAiZGVzdGluYXRpb25faGFuZGxlIjogImZu
X2N2ZSIsICJleHBvcnRfa2V5IjogImZ1bmN0aW9uX2N2ZV9icm93c2UiLCAidXVpZCI6ICI0NmZh
MGE5OS1kY2JjLTQyYjItOGVlMS1hMTg4OTE3NjdiZTgiLCAidmVyc2lvbiI6IDMsICJjcmVhdG9y
IjogeyJpZCI6IDMsICJ0eXBlIjogInVzZXIiLCAibmFtZSI6ICJhQGV4YW1wbGUuY29tIiwgImRp
c3BsYXlfbmFtZSI6ICJhYmxlIGJhY2tlciJ9LCAibGFzdF9tb2RpZmllZF9ieSI6IHsiaWQiOiAz
LCAidHlwZSI6ICJ1c2VyIiwgIm5hbWUiOiAiYUBleGFtcGxlLmNvbSIsICJkaXNwbGF5X25hbWUi
OiAiYWJsZSBiYWNrZXIifSwgImxhc3RfbW9kaWZpZWRfdGltZSI6IDE1NTM2MDU4MjUzNjUsICJ2
aWV3X2l0ZW1zIjogW3sic3RlcF9sYWJlbCI6IG51bGwsICJzaG93X2lmIjogbnVsbCwgImVsZW1l
bnQiOiAiZmllbGRfdXVpZCIsICJmaWVsZF90eXBlIjogIl9fZnVuY3Rpb24iLCAiY29udGVudCI6
ICI5NzcwMmQyMS0yOGFlLTRmNmUtODYzNi1lNzI4NTVhMzE4YmEiLCAic2hvd19saW5rX2hlYWRl
ciI6IGZhbHNlfSwgeyJzdGVwX2xhYmVsIjogbnVsbCwgInNob3dfaWYiOiBudWxsLCAiZWxlbWVu
dCI6ICJmaWVsZF91dWlkIiwgImZpZWxkX3R5cGUiOiAiX19mdW5jdGlvbiIsICJjb250ZW50Ijog
ImYxYTVjNzkwLWZhYTQtNGFhOC1iYjQyLTRmMTc0OTFmZjg0NSIsICJzaG93X2xpbmtfaGVhZGVy
IjogZmFsc2V9XSwgIndvcmtmbG93cyI6IFt7IndvcmtmbG93X2lkIjogNTAsICJuYW1lIjogIkV4
YW1wbGU6IENWRSBCcm93c2UiLCAicHJvZ3JhbW1hdGljX25hbWUiOiAiZXhhbXBsZV9jdmVfYnJv
d3NlIiwgIm9iamVjdF90eXBlIjogImluY2lkZW50IiwgImRlc2NyaXB0aW9uIjogbnVsbCwgInV1
aWQiOiBudWxsLCAiYWN0aW9ucyI6IFtdfV19LCB7ImlkIjogMzksICJuYW1lIjogImZ1bmN0aW9u
X2N2ZV9zZWFyY2giLCAiZGlzcGxheV9uYW1lIjogIkNWRSBTZWFyY2giLCAiZGVzY3JpcHRpb24i
OiB7ImZvcm1hdCI6ICJ0ZXh0IiwgImNvbnRlbnQiOiAiQSBGdW5jdGlvbiB0byBTZWFyY2ggIENv
bW1vbiBWdWxuZXJhYmlsaXR5IEV4cG9zdXJlcyBEYXRhIGZyb20gaHR0cHM6Ly9jdmUuY2lyY2wu
bHUgRGF0YSBCYXNlLiJ9LCAiZGVzdGluYXRpb25faGFuZGxlIjogImZuX2N2ZSIsICJleHBvcnRf
a2V5IjogImZ1bmN0aW9uX2N2ZV9zZWFyY2giLCAidXVpZCI6ICI0MTc2ZjY3Yy1iN2E4LTQ5NTct
YmViMy04YTI3NjNiNTMxMjciLCAidmVyc2lvbiI6IDMsICJjcmVhdG9yIjogeyJpZCI6IDMsICJ0
eXBlIjogInVzZXIiLCAibmFtZSI6ICJhQGV4YW1wbGUuY29tIiwgImRpc3BsYXlfbmFtZSI6ICJh
YmxlIGJhY2tlciJ9LCAibGFzdF9tb2RpZmllZF9ieSI6IHsiaWQiOiAzLCAidHlwZSI6ICJ1c2Vy
IiwgIm5hbWUiOiAiYUBleGFtcGxlLmNvbSIsICJkaXNwbGF5X25hbWUiOiAiYWJsZSBiYWNrZXIi
fSwgImxhc3RfbW9kaWZpZWRfdGltZSI6IDE1NTM2MjQ3MzM2MDgsICJ2aWV3X2l0ZW1zIjogW3si
c3RlcF9sYWJlbCI6IG51bGwsICJzaG93X2lmIjogbnVsbCwgImVsZW1lbnQiOiAiZmllbGRfdXVp
ZCIsICJmaWVsZF90eXBlIjogIl9fZnVuY3Rpb24iLCAiY29udGVudCI6ICJiNmY2MjcxMC1mMTcy
LTQ2ODctYjY5Yi00ODA2MjIwYTJkNTIiLCAic2hvd19saW5rX2hlYWRlciI6IGZhbHNlfSwgeyJz
dGVwX2xhYmVsIjogbnVsbCwgInNob3dfaWYiOiBudWxsLCAiZWxlbWVudCI6ICJmaWVsZF91dWlk
IiwgImZpZWxkX3R5cGUiOiAiX19mdW5jdGlvbiIsICJjb250ZW50IjogImYxYTVjNzkwLWZhYTQt
NGFhOC1iYjQyLTRmMTc0OTFmZjg0NSIsICJzaG93X2xpbmtfaGVhZGVyIjogZmFsc2V9LCB7InN0
ZXBfbGFiZWwiOiBudWxsLCAic2hvd19pZiI6IG51bGwsICJlbGVtZW50IjogImZpZWxkX3V1aWQi
LCAiZmllbGRfdHlwZSI6ICJfX2Z1bmN0aW9uIiwgImNvbnRlbnQiOiAiNGFlNTkyMTMtYmUzNy00
ZTA3LWE3ZWEtMGM0MDkzYWRlNzMwIiwgInNob3dfbGlua19oZWFkZXIiOiBmYWxzZX0sIHsic3Rl
cF9sYWJlbCI6IG51bGwsICJzaG93X2lmIjogbnVsbCwgImVsZW1lbnQiOiAiZmllbGRfdXVpZCIs
ICJmaWVsZF90eXBlIjogIl9fZnVuY3Rpb24iLCAiY29udGVudCI6ICIxMTI1NDhlOC1hNGY4LTQ5
MDYtYmQyZi1hNzlkYzZkNDZkYTUiLCAic2hvd19saW5rX2hlYWRlciI6IGZhbHNlfSwgeyJzdGVw
X2xhYmVsIjogbnVsbCwgInNob3dfaWYiOiBudWxsLCAiZWxlbWVudCI6ICJmaWVsZF91dWlkIiwg
ImZpZWxkX3R5cGUiOiAiX19mdW5jdGlvbiIsICJjb250ZW50IjogIjU5ZDhkZWIwLTc3MjEtNDI4
YS04MmRkLWZhNDg2MmNlYzFhOSIsICJzaG93X2xpbmtfaGVhZGVyIjogZmFsc2V9XSwgIndvcmtm
bG93cyI6IFt7IndvcmtmbG93X2lkIjogNTMsICJuYW1lIjogIkV4YW1wbGU6IENWRSBTZWFyY2gi
LCAicHJvZ3JhbW1hdGljX25hbWUiOiAiZXhhbXBsZV9jdmVfc2VhcmNoIiwgIm9iamVjdF90eXBl
IjogImluY2lkZW50IiwgImRlc2NyaXB0aW9uIjogbnVsbCwgInV1aWQiOiBudWxsLCAiYWN0aW9u
cyI6IFtdfV19XX0=
"""
)
| 72.469256 | 161 | 0.966976 | 834 | 44,786 | 51.828537 | 0.794964 | 0.00111 | 0.002961 | 0.001573 | 0.009023 | 0.009023 | 0.007542 | 0.007542 | 0.005969 | 0.005969 | 0 | 0.117909 | 0.027017 | 44,786 | 617 | 162 | 72.58671 | 0.874036 | 0.018086 | 0 | 0.03125 | 1 | 0 | 0.981512 | 0.960634 | 0 | 1 | 0 | 0 | 0 | 1 | 0.003472 | false | 0 | 0.005208 | 0 | 0.010417 | 0.001736 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
53db1201318e7162d67128062de85623785bb8d4 | 34 | py | Python | erroremailer/__init__.py | stvgz/error-emailer | 9c1707614adc5cb0b22286eae8842ecdf0af0463 | [
"MIT"
] | null | null | null | erroremailer/__init__.py | stvgz/error-emailer | 9c1707614adc5cb0b22286eae8842ecdf0af0463 | [
"MIT"
] | null | null | null | erroremailer/__init__.py | stvgz/error-emailer | 9c1707614adc5cb0b22286eae8842ecdf0af0463 | [
"MIT"
] | null | null | null | from .send_email import EmailError | 34 | 34 | 0.882353 | 5 | 34 | 5.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 1 | 34 | 34 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
9906a3354c93cf6a3dbc268171dbbc1e170aa61f | 95 | py | Python | source/docs_instance.py | martmists/server | 16e1cb6a11dc0db0a4b97a90f59fe027876dc873 | [
"BSD-3-Clause"
] | null | null | null | source/docs_instance.py | martmists/server | 16e1cb6a11dc0db0a4b97a90f59fe027876dc873 | [
"BSD-3-Clause"
] | null | null | null | source/docs_instance.py | martmists/server | 16e1cb6a11dc0db0a4b97a90f59fe027876dc873 | [
"BSD-3-Clause"
] | 1 | 2018-10-25T16:57:09.000Z | 2018-10-25T16:57:09.000Z | from framework.objects import sayonika_instance
app = sayonika_instance
app.gather("routes")
| 15.833333 | 47 | 0.821053 | 12 | 95 | 6.333333 | 0.75 | 0.421053 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 95 | 5 | 48 | 19 | 0.894118 | 0 | 0 | 0 | 0 | 0 | 0.06383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
54cb8e564e6767ecedb98bc836ed8d16a7a526f9 | 107 | py | Python | ghhu/telefone.py | renzon/hotel-urbano | 7942476186113338ef0552061cfe87ca6496beff | [
"MIT"
] | 2 | 2016-06-18T13:19:48.000Z | 2017-06-29T22:39:12.000Z | ghhu/telefone.py | renzon/hotel-urbano | 7942476186113338ef0552061cfe87ca6496beff | [
"MIT"
] | 1 | 2016-06-24T02:41:28.000Z | 2016-06-24T12:44:26.000Z | ghhu/telefone.py | renzon/hotel-urbano | 7942476186113338ef0552061cfe87ca6496beff | [
"MIT"
] | 4 | 2016-06-18T13:07:59.000Z | 2017-06-29T22:39:26.000Z | class Telefone:
def telefonar(self, numero):
return 'Ligando de verdade para {}'.format(numero) | 35.666667 | 58 | 0.682243 | 13 | 107 | 5.615385 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205607 | 107 | 3 | 58 | 35.666667 | 0.858824 | 0 | 0 | 0 | 0 | 0 | 0.240741 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
54e8098fdd6285b662e64eaa6286d80b2eea5755 | 229 | py | Python | EEG_Lightning/dassl/engine/__init__.py | mcd4874/NeurIPS_competition | 4df1f222929e9824a55c9c4ae6634743391b0fe9 | [
"MIT"
] | 23 | 2021-10-14T02:31:06.000Z | 2022-01-25T16:26:44.000Z | EEG_Lightning/dassl/engine/__init__.py | mcd4874/NeurIPS_competition | 4df1f222929e9824a55c9c4ae6634743391b0fe9 | [
"MIT"
] | null | null | null | EEG_Lightning/dassl/engine/__init__.py | mcd4874/NeurIPS_competition | 4df1f222929e9824a55c9c4ae6634743391b0fe9 | [
"MIT"
] | 1 | 2022-03-05T06:54:11.000Z | 2022-03-05T06:54:11.000Z | from .build import TRAINER_REGISTRY, build_trainer # isort:skip
from .trainer import TrainerBase,TrainerMultiAdaptation
from .base import *
from .da import *
# from .da import custom_mcd
# from .dg import *
# from .ssl import *
| 25.444444 | 63 | 0.768559 | 31 | 229 | 5.580645 | 0.483871 | 0.17341 | 0.138728 | 0.208092 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152838 | 229 | 8 | 64 | 28.625 | 0.891753 | 0.323144 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
54eafe26acc4a4b8ba6321acc5e0a1be5c178c7b | 42 | py | Python | components/micropython/modules/sha2017lite/shell.py | badgeteam/Firmware | 6192b2902c70beb7a298a256d9087274d045fbc0 | [
"Apache-2.0"
] | 12 | 2017-06-10T14:51:20.000Z | 2019-04-22T18:21:59.000Z | components/micropython/modules/sha2017lite/shell.py | badgeteam/Firmware | 6192b2902c70beb7a298a256d9087274d045fbc0 | [
"Apache-2.0"
] | 89 | 2017-06-09T20:57:27.000Z | 2018-03-06T19:54:04.000Z | components/micropython/modules/sha2017lite/shell.py | badgeteam/Firmware | 6192b2902c70beb7a298a256d9087274d045fbc0 | [
"Apache-2.0"
] | 22 | 2017-05-31T20:56:16.000Z | 2020-01-21T11:45:49.000Z | import appglue
appglue.start_app("shell")
| 14 | 26 | 0.809524 | 6 | 42 | 5.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 42 | 2 | 27 | 21 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0.119048 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
54f204e33f971706f4a3fb0d402c731b572e4f48 | 1,385 | py | Python | resources/dot_PyCharm/system/python_stubs/-762174762/PySide/QtGui/QContextMenuEvent.py | basepipe/developer_onboarding | 05b6a776f8974c89517868131b201f11c6c2a5ad | [
"MIT"
] | 1 | 2020-04-20T02:27:20.000Z | 2020-04-20T02:27:20.000Z | resources/dot_PyCharm/system/python_stubs/cache/8cdc475d469a13122bc4bc6c3ac1c215d93d5f120f5cc1ef33a8f3088ee54d8e/PySide/QtGui/QContextMenuEvent.py | basepipe/developer_onboarding | 05b6a776f8974c89517868131b201f11c6c2a5ad | [
"MIT"
] | null | null | null | resources/dot_PyCharm/system/python_stubs/cache/8cdc475d469a13122bc4bc6c3ac1c215d93d5f120f5cc1ef33a8f3088ee54d8e/PySide/QtGui/QContextMenuEvent.py | basepipe/developer_onboarding | 05b6a776f8974c89517868131b201f11c6c2a5ad | [
"MIT"
] | null | null | null | # encoding: utf-8
# module PySide.QtGui
# from C:\Python27\lib\site-packages\PySide\QtGui.pyd
# by generator 1.147
# no doc
# imports
import PySide.QtCore as __PySide_QtCore
import Shiboken as __Shiboken
from QInputEvent import QInputEvent
class QContextMenuEvent(QInputEvent):
# no doc
def globalPos(self, *args, **kwargs): # real signature unknown
pass
def globalX(self, *args, **kwargs): # real signature unknown
pass
def globalY(self, *args, **kwargs): # real signature unknown
pass
def pos(self, *args, **kwargs): # real signature unknown
pass
def reason(self, *args, **kwargs): # real signature unknown
pass
def x(self, *args, **kwargs): # real signature unknown
pass
def y(self, *args, **kwargs): # real signature unknown
pass
def __init__(self, *args, **kwargs): # real signature unknown
pass
@staticmethod # known case of __new__
def __new__(S, *more): # real signature unknown; restored from __doc__
""" T.__new__(S, ...) -> a new object with type S, a subtype of T """
pass
Keyboard = PySide.QtGui.QContextMenuEvent.Reason.Keyboard
Mouse = PySide.QtGui.QContextMenuEvent.Reason.Mouse
Other = PySide.QtGui.QContextMenuEvent.Reason.Other
Reason = None # (!) real value is "<type 'PySide.QtGui.QContextMenuEvent.Reason'>"
| 27.156863 | 86 | 0.666426 | 170 | 1,385 | 5.282353 | 0.370588 | 0.13029 | 0.200445 | 0.160356 | 0.361915 | 0.361915 | 0.361915 | 0.319599 | 0 | 0 | 0 | 0.00653 | 0.225993 | 1,385 | 50 | 87 | 27.7 | 0.831157 | 0.368953 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0.111111 | 0 | 0.62963 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
4adedd9b04be39a05ed2554239055703dd9b9bb4 | 120,113 | py | Python | openmdao/core/tests/test_check_derivs.py | jbergeson/OpenMDAO | 50df8f5888c6499a0ef66b836ee63b122a50aaae | [
"Apache-2.0"
] | null | null | null | openmdao/core/tests/test_check_derivs.py | jbergeson/OpenMDAO | 50df8f5888c6499a0ef66b836ee63b122a50aaae | [
"Apache-2.0"
] | null | null | null | openmdao/core/tests/test_check_derivs.py | jbergeson/OpenMDAO | 50df8f5888c6499a0ef66b836ee63b122a50aaae | [
"Apache-2.0"
] | null | null | null | """ Testing for Problem.check_partials and check_totals."""
from io import StringIO
import unittest
import numpy as np
import openmdao.api as om
from openmdao.core.tests.test_impl_comp import QuadraticLinearize, QuadraticJacVec
from openmdao.core.tests.test_matmat import MultiJacVec
from openmdao.test_suite.components.impl_comp_array import TestImplCompArrayMatVec
from openmdao.test_suite.components.paraboloid import Paraboloid
from openmdao.test_suite.components.paraboloid_mat_vec import ParaboloidMatVec
from openmdao.test_suite.components.sellar import SellarDerivatives, SellarDis1withDerivatives, \
SellarDis2withDerivatives
from openmdao.test_suite.components.simple_comps import DoubleArrayComp
from openmdao.test_suite.components.array_comp import ArrayComp
from openmdao.test_suite.groups.parallel_groups import FanInSubbedIDVC, Diamond
from openmdao.utils.assert_utils import assert_near_equal, assert_warning, assert_check_partials, \
assert_no_warning
from openmdao.utils.mpi import MPI
try:
from openmdao.vectors.petsc_vector import PETScVector
except ImportError:
PETScVector = None
class ParaboloidTricky(om.ExplicitComponent):
"""
Evaluates the equation f(x,y) = (x-3)^2 + xy + (y+4)^2 - 3.
"""
def setup(self):
self.add_input('x', val=0.0)
self.add_input('y', val=0.0)
self.add_output('f_xy', val=0.0)
self.scale = 1e-7
self.declare_partials(of='*', wrt='*')
def compute(self, inputs, outputs):
"""
f(x,y) = (x-3)^2 + xy + (y+4)^2 - 3
Optimal solution (minimum): x = 6.6667; y = -7.3333
"""
sc = self.scale
x = inputs['x']*sc
y = inputs['y']*sc
outputs['f_xy'] = (x-3.0)**2 + x*y + (y+4.0)**2 - 3.0
def compute_partials(self, inputs, partials):
"""
Jacobian for our paraboloid.
"""
sc = self.scale
x = inputs['x']
y = inputs['y']
partials['f_xy', 'x'] = 2.0*x*sc*sc - 6.0*sc + y*sc*sc
partials['f_xy', 'y'] = 2.0*y*sc*sc + 8.0*sc + x*sc*sc
class MyCompGoodPartials(om.ExplicitComponent):
def setup(self):
self.add_input('x1', 3.0)
self.add_input('x2', 5.0)
self.add_output('y', 5.5)
self.declare_partials(of='*', wrt='*')
def compute(self, inputs, outputs):
outputs['y'] = 3.0 * inputs['x1'] + 4.0 * inputs['x2']
def compute_partials(self, inputs, partials):
"""Correct derivative."""
J = partials
J['y', 'x1'] = np.array([3.0])
J['y', 'x2'] = np.array([4.0])
class MyCompBadPartials(om.ExplicitComponent):
def setup(self):
self.add_input('y1', 3.0)
self.add_input('y2', 5.0)
self.add_output('z', 5.5)
self.declare_partials(of='*', wrt='*')
def compute(self, inputs, outputs):
outputs['z'] = 3.0 * inputs['y1'] + 4.0 * inputs['y2']
def compute_partials(self, inputs, partials):
"""Intentionally incorrect derivative."""
J = partials
J['z', 'y1'] = np.array([33.0])
J['z', 'y2'] = np.array([40.0])
class MyComp(om.ExplicitComponent):
def setup(self):
self.add_input('x1', 3.0)
self.add_input('x2', 5.0)
self.add_output('y', 5.5)
self.declare_partials(of='*', wrt='*')
def compute(self, inputs, outputs):
outputs['y'] = 3.0*inputs['x1'] + 4.0*inputs['x2']
def compute_partials(self, inputs, partials):
"""Intentionally incorrect derivative."""
J = partials
J['y', 'x1'] = np.array([4.0])
J['y', 'x2'] = np.array([40])
class TestProblemCheckPartials(unittest.TestCase):
def test_incorrect_jacobian(self):
prob = om.Problem()
prob.model.add_subsystem('comp', MyComp())
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
stream = StringIO()
prob.check_partials(out_stream=stream)
lines = stream.getvalue().splitlines()
y_wrt_x1_line = lines.index(" comp: 'y' wrt 'x1'")
self.assertTrue(lines[y_wrt_x1_line+3].endswith('*'),
msg='Error flag expected in output but not displayed')
self.assertTrue(lines[y_wrt_x1_line+5].endswith('*'),
msg='Error flag expected in output but not displayed')
y_wrt_x2_line = lines.index(" comp: 'y' wrt 'x2'")
        self.assertTrue(lines[y_wrt_x2_line+3].endswith('*'),
                        msg='Error flag expected in output but not displayed')
        self.assertTrue(lines[y_wrt_x2_line+5].endswith('*'),
                        msg='Error flag expected in output but not displayed')
def test_component_only(self):
prob = om.Problem()
prob.model = MyComp()
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
stream = StringIO()
prob.check_partials(out_stream=stream)
lines = stream.getvalue().splitlines()
y_wrt_x1_line = lines.index(" : 'y' wrt 'x1'")
self.assertTrue(lines[y_wrt_x1_line+3].endswith('*'),
msg='Error flag expected in output but not displayed')
self.assertTrue(lines[y_wrt_x1_line+5].endswith('*'),
msg='Error flag expected in output but not displayed')
def test_component_only_suppress(self):
prob = om.Problem()
prob.model = MyComp()
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
stream = StringIO()
data = prob.check_partials(out_stream=None)
subheads = data[''][('y', 'x1')]
self.assertTrue('J_fwd' in subheads)
self.assertTrue('rel error' in subheads)
self.assertTrue('abs error' in subheads)
self.assertTrue('magnitude' in subheads)
lines = stream.getvalue().splitlines()
self.assertEqual(len(lines), 0)
def test_component_has_no_outputs(self):
prob = om.Problem()
model = prob.model
model.add_subsystem("indep", om.IndepVarComp('x', 5.))
model.add_subsystem("comp1", Paraboloid())
comp2 = model.add_subsystem("comp2", om.ExplicitComponent())
comp2.add_input('x', val=0.)
model.connect('indep.x', ['comp1.x', 'comp2.x'])
prob.setup()
prob.run_model()
# warning about 'comp2'
msg = "No derivative data found for Component 'comp2'."
with assert_warning(UserWarning, msg):
data = prob.check_partials(out_stream=None)
# and no derivative data for 'comp2'
self.assertFalse('comp2' in data)
# but we still get good derivative data for 'comp1'
self.assertTrue('comp1' in data)
assert_near_equal(data['comp1'][('f_xy', 'x')]['J_fd'][0][0], 4., 1e-6)
assert_near_equal(data['comp1'][('f_xy', 'x')]['J_fwd'][0][0], 4., 1e-15)
def test_component_no_check_partials(self):
prob = om.Problem()
model = prob.model
model.add_subsystem("indep", om.IndepVarComp('x', 5.))
model.add_subsystem("comp1", Paraboloid())
comp2 = model.add_subsystem("comp2", Paraboloid())
model.connect('indep.x', ['comp1.x', 'comp2.x'])
prob.setup()
prob.run_model()
#
# disable partials on comp2
#
comp2._no_check_partials = True
data = prob.check_partials(out_stream=None)
# no derivative data for 'comp2'
self.assertFalse('comp2' in data)
# but we still get good derivative data for 'comp1'
self.assertTrue('comp1' in data)
assert_near_equal(data['comp1'][('f_xy', 'x')]['J_fd'][0][0], 4., 1e-6)
assert_near_equal(data['comp1'][('f_xy', 'x')]['J_fwd'][0][0], 4., 1e-15)
#
# re-enable partials on comp2
#
comp2._no_check_partials = False
data = prob.check_partials(out_stream=None)
# now we should have derivative data for 'comp2'
self.assertTrue('comp2' in data)
assert_near_equal(data['comp2'][('f_xy', 'x')]['J_fd'][0][0], 4., 1e-6)
assert_near_equal(data['comp2'][('f_xy', 'x')]['J_fwd'][0][0], 4., 1e-15)
# and still get good derivative data for 'comp1'
self.assertTrue('comp1' in data)
assert_near_equal(data['comp1'][('f_xy', 'x')]['J_fd'][0][0], 4., 1e-6)
assert_near_equal(data['comp1'][('f_xy', 'x')]['J_fwd'][0][0], 4., 1e-15)
def test_missing_entry(self):
class MyComp(om.ExplicitComponent):
def setup(self):
self.add_input('x1', 3.0)
self.add_input('x2', 5.0)
self.add_output('y', 5.5)
self.declare_partials(of='*', wrt='*')
self.lin_count = 0
def compute(self, inputs, outputs):
outputs['y'] = 3.0*inputs['x1'] + 4.0*inputs['x2']
def compute_partials(self, inputs, partials):
"""Intentionally left out derivative."""
J = partials
J['y', 'x1'] = np.array([3.0])
self.lin_count += 1
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('x1', 3.0))
prob.model.add_subsystem('p2', om.IndepVarComp('x2', 5.0))
prob.model.add_subsystem('comp', MyComp())
prob.model.connect('p1.x1', 'comp.x1')
prob.model.connect('p2.x2', 'comp.x2')
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
data = prob.check_partials(out_stream=None)
self.assertEqual(prob.model.comp.lin_count, 1)
abs_error = data['comp']['y', 'x1']['abs error']
rel_error = data['comp']['y', 'x1']['rel error']
self.assertAlmostEqual(abs_error.forward, 0.)
self.assertAlmostEqual(rel_error.forward, 0.)
self.assertAlmostEqual(np.linalg.norm(data['comp']['y', 'x1']['J_fd'] - 3.), 0.,
delta=1e-6)
abs_error = data['comp']['y', 'x2']['abs error']
rel_error = data['comp']['y', 'x2']['rel error']
self.assertAlmostEqual(abs_error.forward, 4.)
self.assertAlmostEqual(rel_error.forward, 1.)
self.assertAlmostEqual(np.linalg.norm(data['comp']['y', 'x2']['J_fd'] - 4.), 0.,
delta=1e-6)
def test_nested_fd_units(self):
class UnitCompBase(om.ExplicitComponent):
def setup(self):
self.add_input('T', val=284., units="degR", desc="Temperature")
self.add_input('P', val=1., units='lbf/inch**2', desc="Pressure")
self.add_output('flow:T', val=284., units="degR", desc="Temperature")
self.add_output('flow:P', val=1., units='lbf/inch**2', desc="Pressure")
# Finite difference everything
self.declare_partials(of='*', wrt='*', method='fd')
def compute(self, inputs, outputs):
outputs['flow:T'] = inputs['T']
outputs['flow:P'] = inputs['P']
p = om.Problem()
model = p.model
indep = model.add_subsystem('indep', om.IndepVarComp(), promotes=['*'])
indep.add_output('T', val=100., units='degK')
indep.add_output('P', val=1., units='bar')
model.add_subsystem('units', UnitCompBase(), promotes=['*'])
p.setup()
data = p.check_partials(out_stream=None)
for comp_name, comp in data.items():
for partial_name, partial in comp.items():
forward = partial['J_fwd']
fd = partial['J_fd']
self.assertAlmostEqual(np.linalg.norm(forward - fd), 0., delta=1e-6)
def test_units(self):
class UnitCompBase(om.ExplicitComponent):
def setup(self):
self.add_input('T', val=284., units="degR", desc="Temperature")
self.add_input('P', val=1., units='lbf/inch**2', desc="Pressure")
self.add_output('flow:T', val=284., units="degR", desc="Temperature")
self.add_output('flow:P', val=1., units='lbf/inch**2', desc="Pressure")
self.run_count = 0
self.declare_partials(of='*', wrt='*')
def compute_partials(self, inputs, partials):
partials['flow:T', 'T'] = 1.
partials['flow:P', 'P'] = 1.
def compute(self, inputs, outputs):
outputs['flow:T'] = inputs['T']
outputs['flow:P'] = inputs['P']
self.run_count += 1
p = om.Problem()
model = p.model
indep = model.add_subsystem('indep', om.IndepVarComp(), promotes=['*'])
indep.add_output('T', val=100., units='degK')
indep.add_output('P', val=1., units='bar')
units = model.add_subsystem('units', UnitCompBase(), promotes=['*'])
model.nonlinear_solver = om.NonlinearRunOnce()
p.setup()
data = p.check_partials(out_stream=None)
for comp_name, comp in data.items():
for partial_name, partial in comp.items():
abs_error = partial['abs error']
self.assertAlmostEqual(abs_error.forward, 0.)
# Make sure we only FD this twice.
# The count is 5 because in check_partials, there are two calls to apply_nonlinear
# when compute the fwd and rev analytic derivatives, then one call to apply_nonlinear
# to compute the reference point for FD, then two additional calls for the two inputs.
self.assertEqual(units.run_count, 5)
def test_scalar_val(self):
class PassThrough(om.ExplicitComponent):
"""
Helper component that is needed when variables must be passed
directly from input to output
"""
def __init__(self, i_var, o_var, val, units=None):
super().__init__()
self.i_var = i_var
self.o_var = o_var
self.units = units
self.val = val
if isinstance(val, (float, int)) or np.isscalar(val):
size = 1
else:
size = np.prod(val.shape)
self.size = size
def setup(self):
if self.units is None:
self.add_input(self.i_var, self.val)
self.add_output(self.o_var, self.val)
else:
self.add_input(self.i_var, self.val, units=self.units)
self.add_output(self.o_var, self.val, units=self.units)
row_col = np.arange(self.size)
self.declare_partials(of=self.o_var, wrt=self.i_var,
val=1, rows=row_col, cols=row_col)
def compute(self, inputs, outputs):
outputs[self.o_var] = inputs[self.i_var]
def linearize(self, inputs, outputs, J):
pass
p = om.Problem()
indeps = p.model.add_subsystem('indeps', om.IndepVarComp(), promotes=['*'])
indeps.add_output('foo', val=np.ones(4))
indeps.add_output('foo2', val=np.ones(4))
p.model.add_subsystem('pt', PassThrough("foo", "bar", val=np.ones(4)), promotes=['*'])
p.model.add_subsystem('pt2', PassThrough("foo2", "bar2", val=np.ones(4)), promotes=['*'])
p.set_solver_print(level=0)
p.setup()
p.run_model()
data = p.check_partials(out_stream=None)
identity = np.eye(4)
assert_near_equal(data['pt'][('bar', 'foo')]['J_fwd'], identity, 1e-15)
assert_near_equal(data['pt'][('bar', 'foo')]['J_fd'], identity, 1e-9)
assert_near_equal(data['pt2'][('bar2', 'foo2')]['J_fwd'], identity, 1e-15)
assert_near_equal(data['pt2'][('bar2', 'foo2')]['J_fd'], identity, 1e-9)
def test_matrix_free_explicit(self):
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('x', 3.0))
prob.model.add_subsystem('p2', om.IndepVarComp('y', 5.0))
prob.model.add_subsystem('comp', ParaboloidMatVec())
prob.model.connect('p1.x', 'comp.x')
prob.model.connect('p2.y', 'comp.y')
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
data = prob.check_partials(out_stream=None)
for comp_name, comp in data.items():
for partial_name, partial in comp.items():
abs_error = partial['abs error']
rel_error = partial['rel error']
assert_near_equal(abs_error.forward, 0., 1e-5)
assert_near_equal(abs_error.reverse, 0., 1e-5)
assert_near_equal(abs_error.forward_reverse, 0., 1e-5)
assert_near_equal(rel_error.forward, 0., 1e-5)
assert_near_equal(rel_error.reverse, 0., 1e-5)
assert_near_equal(rel_error.forward_reverse, 0., 1e-5)
assert_near_equal(data['comp'][('f_xy', 'x')]['J_fwd'][0][0], 5.0, 1e-6)
assert_near_equal(data['comp'][('f_xy', 'x')]['J_rev'][0][0], 5.0, 1e-6)
assert_near_equal(data['comp'][('f_xy', 'y')]['J_fwd'][0][0], 21.0, 1e-6)
assert_near_equal(data['comp'][('f_xy', 'y')]['J_rev'][0][0], 21.0, 1e-6)
def test_matrix_free_implicit(self):
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('rhs', np.ones((2, ))))
prob.model.add_subsystem('comp', TestImplCompArrayMatVec())
prob.model.connect('p1.rhs', 'comp.rhs')
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
data = prob.check_partials(out_stream=None)
for comp_name, comp in data.items():
for partial_name, partial in comp.items():
abs_error = partial['abs error']
rel_error = partial['rel error']
assert_near_equal(abs_error.forward, 0., 1e-5)
assert_near_equal(abs_error.reverse, 0., 1e-5)
assert_near_equal(abs_error.forward_reverse, 0., 1e-5)
assert_near_equal(rel_error.forward, 0., 1e-5)
assert_near_equal(rel_error.reverse, 0., 1e-5)
assert_near_equal(rel_error.forward_reverse, 0., 1e-5)
def test_implicit_undeclared(self):
# Test to see that check_partials works when state_wrt_input and state_wrt_state
# partials are missing.
class ImplComp4Test(om.ImplicitComponent):
def setup(self):
self.add_input('x', np.ones(2))
self.add_input('dummy', np.ones(2))
self.add_output('y', np.ones(2))
self.add_output('extra', np.ones(2))
self.mtx = np.array([
[3., 4.],
[2., 3.],
])
self.declare_partials(of='*', wrt='*')
def apply_nonlinear(self, inputs, outputs, residuals):
residuals['y'] = self.mtx.dot(outputs['y']) - inputs['x']
def linearize(self, inputs, outputs, partials):
partials['y', 'x'] = -np.eye(2)
partials['y', 'y'] = self.mtx
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('x', np.ones((2, ))))
prob.model.add_subsystem('p2', om.IndepVarComp('dummy', np.ones((2, ))))
prob.model.add_subsystem('comp', ImplComp4Test())
prob.model.connect('p1.x', 'comp.x')
prob.model.connect('p2.dummy', 'comp.dummy')
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
data = prob.check_partials(out_stream=None)
assert_near_equal(data['comp']['y', 'extra']['J_fwd'], np.zeros((2, 2)))
assert_near_equal(data['comp']['y', 'dummy']['J_fwd'], np.zeros((2, 2)))
def test_dependent_false_hide(self):
# Test that we omit derivs declared with dependent=False
class SimpleComp1(om.ExplicitComponent):
def setup(self):
self.add_input('z', shape=(2, 2))
self.add_input('x', shape=(2, 2))
self.add_output('g', shape=(2, 2))
self.declare_partials(of='g', wrt='x')
self.declare_partials(of='g', wrt='z', dependent=False)
def compute(self, inputs, outputs):
outputs['g'] = 3.0*inputs['x']
def compute_partials(self, inputs, partials):
partials['g', 'x'] = 3.
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('z', np.ones((2, 2))))
prob.model.add_subsystem('p2', om.IndepVarComp('x', np.ones((2, 2))))
prob.model.add_subsystem('comp', SimpleComp1())
prob.model.connect('p1.z', 'comp.z')
prob.model.connect('p2.x', 'comp.x')
prob.setup()
stream = StringIO()
data = prob.check_partials(out_stream=stream)
lines = stream.getvalue().splitlines()
self.assertTrue(" comp: 'g' wrt 'z'" not in lines)
self.assertTrue(('g', 'z') not in data['comp'])
self.assertTrue(" comp: 'g' wrt 'x'" in lines)
self.assertTrue(('g', 'x') in data['comp'])
def test_dependent_false_compact_print_never_hide(self):
# API Change: we no longer omit derivatives for compact_print, even when declared as not
# dependent.
class SimpleComp1(om.ExplicitComponent):
def setup(self):
self.add_input('z', shape=(2, 2))
self.add_input('x', shape=(2, 2))
self.add_output('g', shape=(2, 2))
self.declare_partials(of='g', wrt='x')
self.declare_partials(of='g', wrt='z', dependent=False)
def compute(self, inputs, outputs):
outputs['g'] = 3.0*inputs['x']
def compute_partials(self, inputs, partials):
partials['g', 'x'] = 3.
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('z', np.ones((2, 2))))
prob.model.add_subsystem('p2', om.IndepVarComp('x', np.ones((2, 2))))
prob.model.add_subsystem('comp', SimpleComp1())
prob.model.connect('p1.z', 'comp.z')
prob.model.connect('p2.x', 'comp.x')
prob.setup()
stream = StringIO()
data = prob.check_partials(out_stream=stream, compact_print=True)
txt = stream.getvalue()
self.assertTrue("'g' wrt 'z'" in txt)
self.assertTrue(('g', 'z') in data['comp'])
self.assertTrue("'g' wrt 'x'" in txt)
self.assertTrue(('g', 'x') in data['comp'])
def test_dependent_false_show(self):
# Test that we show derivs declared with dependent=False if the FD value
# is not approximately zero.
class SimpleComp2(om.ExplicitComponent):
def setup(self):
self.add_input('z', shape=(2, 2))
self.add_input('x', shape=(2, 2))
self.add_output('g', shape=(2, 2))
self.declare_partials(of='g', wrt='x')
self.declare_partials('g', 'z', dependent=False)
def compute(self, inputs, outputs):
outputs['g'] = 2.0*inputs['z'] + 3.0*inputs['x']
def compute_partials(self, inputs, partials):
partials['g', 'x'] = 3.
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('z', np.ones((2, 2))))
prob.model.add_subsystem('p2', om.IndepVarComp('x', np.ones((2, 2))))
prob.model.add_subsystem('comp', SimpleComp2())
prob.model.connect('p1.z', 'comp.z')
prob.model.connect('p2.x', 'comp.x')
prob.setup()
stream = StringIO()
data = prob.check_partials(out_stream=stream)
lines = stream.getvalue().splitlines()
self.assertTrue(" comp: 'g' wrt 'z'" in lines)
self.assertTrue(('g', 'z') in data['comp'])
self.assertTrue(" comp: 'g' wrt 'x'" in lines)
self.assertTrue(('g', 'x') in data['comp'])
def test_set_step_on_comp(self):
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('x', 3.0))
prob.model.add_subsystem('p2', om.IndepVarComp('y', 5.0))
comp = prob.model.add_subsystem('comp', ParaboloidTricky())
prob.model.connect('p1.x', 'comp.x')
prob.model.connect('p2.y', 'comp.y')
prob.set_solver_print(level=0)
comp.set_check_partial_options(wrt='*', step=1e-2)
prob.setup()
prob.run_model()
data = prob.check_partials(out_stream=None, compact_print=True)
# This will fail unless the check step is set on the component.
x_error = data['comp']['f_xy', 'x']['rel error']
self.assertLess(x_error.forward, 1e-5)
def test_set_step_global(self):
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('x', 3.0))
prob.model.add_subsystem('p2', om.IndepVarComp('y', 5.0))
prob.model.add_subsystem('comp', ParaboloidTricky())
prob.model.connect('p1.x', 'comp.x')
prob.model.connect('p2.y', 'comp.y')
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
data = prob.check_partials(out_stream=None, step=1e-2)
# This will fail unless you set the global step.
x_error = data['comp']['f_xy', 'x']['rel error']
self.assertLess(x_error.forward, 1e-5)
def test_complex_step_not_allocated(self):
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('x', 3.0))
prob.model.add_subsystem('p2', om.IndepVarComp('y', 5.0))
comp = prob.model.add_subsystem('comp', ParaboloidMatVec())
prob.model.connect('p1.x', 'comp.x')
prob.model.connect('p2.y', 'comp.y')
prob.set_solver_print(level=0)
comp.set_check_partial_options(wrt='*', method='cs')
prob.setup()
prob.run_model()
msg = "The following components requested complex step, but force_alloc_complex " + \
"has not been set to True, so finite difference was used: ['comp']\n" + \
"To enable complex step, specify 'force_alloc_complex=True' when calling " + \
"setup on the problem, e.g. 'problem.setup(force_alloc_complex=True)'"
with assert_warning(UserWarning, msg):
data = prob.check_partials(out_stream=None)
# Derivative still calculated, but with fd instead.
x_error = data['comp']['f_xy', 'x']['rel error']
self.assertLess(x_error.forward, 1e-5)
self.assertLess(x_error.reverse, 1e-5)
def test_set_method_on_comp(self):
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('x', 3.0))
prob.model.add_subsystem('p2', om.IndepVarComp('y', 5.0))
comp = prob.model.add_subsystem('comp', ParaboloidTricky())
prob.model.connect('p1.x', 'comp.x')
prob.model.connect('p2.y', 'comp.y')
prob.set_solver_print(level=0)
comp.set_check_partial_options(wrt='*', method='cs')
prob.setup(check=False, force_alloc_complex=True)
prob.run_model()
data = prob.check_partials(out_stream=None, compact_print=True)
x_error = data['comp']['f_xy', 'x']['rel error']
self.assertLess(x_error.forward, 1e-5)
def test_set_method_global(self):
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('x', 3.0))
prob.model.add_subsystem('p2', om.IndepVarComp('y', 5.0))
prob.model.add_subsystem('comp', ParaboloidTricky())
prob.model.connect('p1.x', 'comp.x')
prob.model.connect('p2.y', 'comp.y')
prob.set_solver_print(level=0)
prob.setup(check=False, force_alloc_complex=True)
prob.run_model()
data = prob.check_partials(out_stream=None, method='cs')
x_error = data['comp']['f_xy', 'x']['rel error']
self.assertLess(x_error.forward, 1e-5)
def test_set_form_on_comp(self):
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('x', 3.0))
prob.model.add_subsystem('p2', om.IndepVarComp('y', 5.0))
comp = prob.model.add_subsystem('comp', ParaboloidTricky())
prob.model.connect('p1.x', 'comp.x')
prob.model.connect('p2.y', 'comp.y')
prob.set_solver_print(level=0)
comp.set_check_partial_options(wrt='*', form='central')
prob.setup()
prob.run_model()
data = prob.check_partials(out_stream=None, compact_print=True)
# This will fail unless the form is set to 'central' on the component.
x_error = data['comp']['f_xy', 'x']['rel error']
self.assertLess(x_error.forward, 1e-3)
def test_set_form_global(self):
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('x', 3.0))
prob.model.add_subsystem('p2', om.IndepVarComp('y', 5.0))
prob.model.add_subsystem('comp', ParaboloidTricky())
prob.model.connect('p1.x', 'comp.x')
prob.model.connect('p2.y', 'comp.y')
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
data = prob.check_partials(out_stream=None, form='central')
# This will fail unless the global form is set to 'central'.
x_error = data['comp']['f_xy', 'x']['rel error']
self.assertLess(x_error.forward, 1e-3)
def test_set_step_calc_on_comp(self):
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('x', 3.0))
prob.model.add_subsystem('p2', om.IndepVarComp('y', 5.0))
comp = prob.model.add_subsystem('comp', ParaboloidTricky())
prob.model.connect('p1.x', 'comp.x')
prob.model.connect('p2.y', 'comp.y')
prob.set_solver_print(level=0)
comp.set_check_partial_options(wrt='*', step_calc='rel')
prob.setup()
prob.run_model()
data = prob.check_partials(out_stream=None, compact_print=True)
# This will fail unless step_calc is set to 'rel' on the component.
x_error = data['comp']['f_xy', 'x']['rel error']
self.assertLess(x_error.forward, 3e-3)
def test_set_step_calc_global(self):
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('x', 3.0))
prob.model.add_subsystem('p2', om.IndepVarComp('y', 5.0))
prob.model.add_subsystem('comp', ParaboloidTricky())
prob.model.connect('p1.x', 'comp.x')
prob.model.connect('p2.y', 'comp.y')
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
data = prob.check_partials(out_stream=None, step_calc='rel')
# This will fail unless the global step_calc is set to 'rel'.
x_error = data['comp']['f_xy', 'x']['rel error']
self.assertLess(x_error.forward, 3e-3)
def test_set_check_option_precedence(self):
# Test that the last matching call to set_check_partial_options takes
# precedence when wildcard patterns overlap.
class SimpleComp1(om.ExplicitComponent):
def setup(self):
self.add_input('ab', 13.0)
self.add_input('aba', 13.0)
self.add_input('ba', 13.0)
self.add_output('y', 13.0)
self.declare_partials(of='*', wrt='*')
def compute(self, inputs, outputs):
ab = inputs['ab']
aba = inputs['aba']
ba = inputs['ba']
outputs['y'] = ab**3 + aba**3 + ba**3
def compute_partials(self, inputs, partials):
ab = inputs['ab']
aba = inputs['aba']
ba = inputs['ba']
partials['y', 'ab'] = 3.0*ab**2
partials['y', 'aba'] = 3.0*aba**2
partials['y', 'ba'] = 3.0*ba**2
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('ab', 13.0))
prob.model.add_subsystem('p2', om.IndepVarComp('aba', 13.0))
prob.model.add_subsystem('p3', om.IndepVarComp('ba', 13.0))
comp = prob.model.add_subsystem('comp', SimpleComp1())
prob.model.connect('p1.ab', 'comp.ab')
prob.model.connect('p2.aba', 'comp.aba')
prob.model.connect('p3.ba', 'comp.ba')
prob.setup()
comp.set_check_partial_options(wrt='a*', step=1e-2)
comp.set_check_partial_options(wrt='*a', step=1e-4)
prob.run_model()
data = prob.check_partials(out_stream=None)
# Note 'aba' gets the better value from the second options call with the *a wildcard.
assert_near_equal(data['comp']['y', 'ab']['J_fd'][0][0], 507.3901, 1e-4)
assert_near_equal(data['comp']['y', 'aba']['J_fd'][0][0], 507.0039, 1e-4)
assert_near_equal(data['comp']['y', 'ba']['J_fd'][0][0], 507.0039, 1e-4)
def test_option_printing(self):
# Make sure we print the approximation type for each variable.
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('x', 3.0))
prob.model.add_subsystem('p2', om.IndepVarComp('y', 5.0))
comp = prob.model.add_subsystem('comp', ParaboloidTricky())
prob.model.connect('p1.x', 'comp.x')
prob.model.connect('p2.y', 'comp.y')
prob.set_solver_print(level=0)
comp.set_check_partial_options(wrt='x', method='cs')
comp.set_check_partial_options(wrt='y', form='central')
prob.setup(check=False, force_alloc_complex=True)
prob.run_model()
stream = StringIO()
prob.check_partials(out_stream=stream)
lines = stream.getvalue().splitlines()
self.assertTrue('cs' in lines[5],
msg='Did you change the format for printing check derivs?')
self.assertTrue('fd' in lines[19],
msg='Did you change the format for printing check derivs?')
def test_set_check_partial_options_invalid(self):
import openmdao.api as om
from openmdao.core.tests.test_check_derivs import ParaboloidTricky
from openmdao.test_suite.components.paraboloid_mat_vec import ParaboloidMatVec
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('x', 3.0))
prob.model.add_subsystem('p2', om.IndepVarComp('y', 5.0))
comp = prob.model.add_subsystem('comp', ParaboloidTricky())
prob.model.add_subsystem('comp2', ParaboloidMatVec())
prob.model.connect('p1.x', 'comp.x')
prob.model.connect('p2.y', 'comp.y')
prob.model.connect('comp.f_xy', 'comp2.x')
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
# check invalid wrt
with self.assertRaises(ValueError) as cm:
comp.set_check_partial_options(wrt=np.array([1.0]))
self.assertEqual(str(cm.exception),
"'comp' <class ParaboloidTricky>: The value of 'wrt' must be a string or list of strings, but a "
"type of 'ndarray' was provided.")
# check invalid method
with self.assertRaises(ValueError) as cm:
comp.set_check_partial_options(wrt=['*'], method='foo')
self.assertEqual(str(cm.exception),
"'comp' <class ParaboloidTricky>: Method 'foo' is not supported, method must be one of ('fd', 'cs')")
# check invalid form
comp._declared_partial_checks = []
comp.set_check_partial_options(wrt=['*'], form='foo')
with self.assertRaises(ValueError) as cm:
prob.check_partials()
# The form options sometimes print out in different order.
msg = str(cm.exception)
self.assertTrue("'foo' is not a valid form of finite difference; "
"must be one of [" in msg, 'error message not correct.')
self.assertTrue('forward' in msg, 'error message not correct.')
self.assertTrue('backward' in msg, 'error message not correct.')
self.assertTrue('central' in msg, 'error message not correct.')
# check invalid step
with self.assertRaises(ValueError) as cm:
comp.set_check_partial_options(wrt=['*'], step='foo')
self.assertEqual(str(cm.exception),
"'comp' <class ParaboloidTricky>: The value of 'step' must be numeric, but 'foo' was specified.")
# check invalid step_calc
with self.assertRaises(ValueError) as cm:
comp.set_check_partial_options(wrt=['*'], step_calc='foo')
self.assertEqual(str(cm.exception),
"'comp' <class ParaboloidTricky>: The value of 'step_calc' must be one of ('abs', 'rel'), "
"but 'foo' was specified.")
# check invalid wrt
comp._declared_partial_checks = []
comp.set_check_partial_options(wrt=['x*', 'y', 'z', 'a*'])
with self.assertRaises(ValueError) as cm:
prob.check_partials()
self.assertEqual(str(cm.exception), "'comp' <class ParaboloidTricky>: Invalid 'wrt' variables specified "
"for check_partial options: ['z'].")
# check multiple invalid wrt
comp._declared_partial_checks = []
comp.set_check_partial_options(wrt=['a', 'b', 'c'])
with self.assertRaises(ValueError) as cm:
prob.check_partials()
self.assertEqual(str(cm.exception), "'comp' <class ParaboloidTricky>: Invalid 'wrt' variables specified "
"for check_partial options: ['a', 'b', 'c'].")
def test_compact_print_formatting(self):
class MyCompShortVarNames(om.ExplicitComponent):
def setup(self):
self.add_input('x1', 3.0)
self.add_input('x2', 5.0)
self.add_output('y', 5.5)
self.declare_partials(of='*', wrt='*')
def compute(self, inputs, outputs):
outputs['y'] = 3.0*inputs['x1'] + 4.0*inputs['x2']
def compute_partials(self, inputs, partials):
"""Intentionally incorrect derivative."""
J = partials
J['y', 'x1'] = np.array([4.0])
J['y', 'x2'] = np.array([40])
class MyCompLongVarNames(om.ExplicitComponent):
def setup(self):
self.add_input('really_long_variable_name_x1', 3.0)
self.add_input('x2', 5.0)
self.add_output('really_long_variable_name_y', 5.5)
self.declare_partials(of='*', wrt='*')
def compute(self, inputs, outputs):
outputs['really_long_variable_name_y'] = \
3.0*inputs['really_long_variable_name_x1'] + 4.0*inputs['x2']
def compute_partials(self, inputs, partials):
"""Intentionally incorrect derivative."""
J = partials
J['really_long_variable_name_y', 'really_long_variable_name_x1'] = np.array([4.0])
J['really_long_variable_name_y', 'x2'] = np.array([40])
# First short var names
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('x1', 3.0))
prob.model.add_subsystem('p2', om.IndepVarComp('x2', 5.0))
prob.model.add_subsystem('comp', MyCompShortVarNames())
prob.model.connect('p1.x1', 'comp.x1')
prob.model.connect('p2.x2', 'comp.x2')
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
stream = StringIO()
prob.check_partials(out_stream=stream, compact_print=True)
lines = stream.getvalue().splitlines()
# Check to make sure all the header and value lines have their columns lined up
header_locations_of_bars = None
sep = '|'
for line in lines:
if sep in line:
if header_locations_of_bars:
value_locations_of_bars = [i for i, ltr in enumerate(line) if ltr == sep]
self.assertEqual(value_locations_of_bars, header_locations_of_bars,
msg="Column separators should all be aligned")
else:
header_locations_of_bars = [i for i, ltr in enumerate(line) if ltr == sep]
# Then long var names
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('really_long_variable_name_x1', 3.0))
prob.model.add_subsystem('p2', om.IndepVarComp('x2', 5.0))
prob.model.add_subsystem('comp', MyCompLongVarNames())
prob.model.connect('p1.really_long_variable_name_x1', 'comp.really_long_variable_name_x1')
prob.model.connect('p2.x2', 'comp.x2')
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
stream = StringIO()
prob.check_partials(out_stream=stream, compact_print=True)
lines = stream.getvalue().splitlines()
# Check to make sure all the header and value lines have their columns lined up
header_locations_of_bars = None
sep = '|'
for line in lines:
if sep in line:
if header_locations_of_bars:
value_locations_of_bars = [i for i, ltr in enumerate(line) if ltr == sep]
self.assertEqual(value_locations_of_bars, header_locations_of_bars,
msg="Column separators should all be aligned")
else:
header_locations_of_bars = [i for i, ltr in enumerate(line) if ltr == sep]
def test_compact_print_exceed_tol(self):
prob = om.Problem()
prob.model = MyCompGoodPartials()
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
stream = StringIO()
prob.check_partials(out_stream=stream, compact_print=True)
self.assertEqual(stream.getvalue().count('>ABS_TOL'), 0)
self.assertEqual(stream.getvalue().count('>REL_TOL'), 0)
prob = om.Problem()
prob.model = MyCompBadPartials()
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
stream = StringIO()
prob.check_partials(out_stream=stream, compact_print=True)
self.assertEqual(stream.getvalue().count('>ABS_TOL'), 2)
self.assertEqual(stream.getvalue().count('>REL_TOL'), 2)
def test_check_partials_display_rev(self):
# 1: Check display of revs for implicit comp for compact and non-compact display
group = om.Group()
comp1 = group.add_subsystem('comp1', om.IndepVarComp())
comp1.add_output('a', 1.0)
comp1.add_output('b', -4.0)
comp1.add_output('c', 3.0)
group.add_subsystem('comp2', QuadraticLinearize())
group.add_subsystem('comp3', QuadraticJacVec())
group.connect('comp1.a', 'comp2.a')
group.connect('comp1.b', 'comp2.b')
group.connect('comp1.c', 'comp2.c')
group.connect('comp1.a', 'comp3.a')
group.connect('comp1.b', 'comp3.b')
group.connect('comp1.c', 'comp3.c')
prob = om.Problem(model=group)
prob.setup()
stream = StringIO()
prob.check_partials(out_stream=stream, compact_print=True)
self.assertEqual(stream.getvalue().count('n/a'), 25)
self.assertEqual(stream.getvalue().count('rev'), 15)
self.assertEqual(stream.getvalue().count('Component'), 2)
self.assertEqual(stream.getvalue().count('wrt'), 12)
stream = StringIO()
prob.check_partials(out_stream=stream, compact_print=False)
self.assertEqual(stream.getvalue().count('Reverse Magnitude'), 4)
self.assertEqual(stream.getvalue().count('Raw Reverse Derivative'), 4)
self.assertEqual(stream.getvalue().count('Jrev'), 20)
# 2: Explicit comp, all comps define Jacobians for compact and non-compact display
class MyComp(om.ExplicitComponent):
def setup(self):
self.add_input('x1', 3.0)
self.add_input('x2', 5.0)
self.add_output('z', 5.5)
self.declare_partials(of='*', wrt='*')
def compute(self, inputs, outputs):
outputs['z'] = 3.0 * inputs['x1'] - 4444.0 * inputs['x2']
def compute_partials(self, inputs, partials):
"""Correct derivative."""
J = partials
J['z', 'x1'] = np.array([3.0])
J['z', 'x2'] = np.array([-4444.0])
prob = om.Problem()
prob.model = MyComp()
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
stream = StringIO()
prob.check_partials(out_stream=stream, compact_print=True)
self.assertEqual(stream.getvalue().count('rev'), 0)
stream = StringIO()
prob.check_partials(out_stream=stream, compact_print=False)
# All comps provide analytic Jacobians in this case, so rev should not be shown.
self.assertEqual(stream.getvalue().count('Analytic Magnitude'), 2)
self.assertEqual(stream.getvalue().count('Forward Magnitude'), 0)
self.assertEqual(stream.getvalue().count('Reverse Magnitude'), 0)
self.assertEqual(stream.getvalue().count('Absolute Error'), 2)
self.assertEqual(stream.getvalue().count('Relative Error'), 2)
self.assertEqual(stream.getvalue().count('Raw Analytic Derivative'), 2)
self.assertEqual(stream.getvalue().count('Raw Forward Derivative'), 0)
self.assertEqual(stream.getvalue().count('Raw Reverse Derivative'), 0)
self.assertEqual(stream.getvalue().count('Raw FD Derivative'), 2)
# 3: Explicit comp that does not define Jacobian. It defines compute_jacvec_product
# For both compact and non-compact display
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('x', 3.0))
prob.model.add_subsystem('p2', om.IndepVarComp('y', 5.0))
prob.model.add_subsystem('comp', ParaboloidMatVec())
prob.model.connect('p1.x', 'comp.x')
prob.model.connect('p2.y', 'comp.y')
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
stream = StringIO()
prob.check_partials(out_stream=stream, compact_print=True)
self.assertEqual(stream.getvalue().count('rev'), 10)
stream = StringIO()
prob.check_partials(out_stream=stream, compact_print=False)
self.assertEqual(stream.getvalue().count('Reverse'), 4)
self.assertEqual(stream.getvalue().count('Jrev'), 10)
# 4: Mixed comps. Some with jacobians. Some not
prob = om.Problem()
prob.model.add_subsystem('p0', om.IndepVarComp('x1', 3.0))
prob.model.add_subsystem('p1', om.IndepVarComp('x2', 5.0))
prob.model.add_subsystem('c0', MyComp())  # inputs: x1, x2; output: z
prob.model.add_subsystem('p2', om.IndepVarComp('y', 5.0))
prob.model.add_subsystem('comp', ParaboloidMatVec())
prob.model.connect('p0.x1', 'c0.x1')
prob.model.connect('p1.x2', 'c0.x2')
prob.model.connect('c0.z', 'comp.x')
prob.model.connect('p2.y', 'comp.y')
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
stream = StringIO()
prob.check_partials(out_stream=stream, compact_print=True)
self.assertEqual(stream.getvalue().count('n/a'), 10)
self.assertEqual(stream.getvalue().count('rev'), 15)
self.assertEqual(stream.getvalue().count('Component'), 2)
self.assertEqual(stream.getvalue().count('wrt'), 8)
stream = StringIO()
prob.check_partials(out_stream=stream, compact_print=False)
self.assertEqual(stream.getvalue().count('Analytic Magnitude'), 2)
self.assertEqual(stream.getvalue().count('Forward Magnitude'), 2)
self.assertEqual(stream.getvalue().count('Reverse Magnitude'), 2)
self.assertEqual(stream.getvalue().count('Absolute Error'), 8)
self.assertEqual(stream.getvalue().count('Relative Error'), 8)
self.assertEqual(stream.getvalue().count('Raw Analytic Derivative'), 2)
self.assertEqual(stream.getvalue().count('Raw Forward Derivative'), 2)
self.assertEqual(stream.getvalue().count('Raw Reverse Derivative'), 2)
self.assertEqual(stream.getvalue().count('Raw FD Derivative'), 4)
# 5: One comp defines compute_multi_jacvec_product
size = 6
prob = om.Problem()
model = prob.model
model.add_subsystem('px', om.IndepVarComp('x', val=(np.arange(size, dtype=float) + 1.) * 3.0))
model.add_subsystem('py', om.IndepVarComp('y', val=(np.arange(size, dtype=float) + 1.) * 2.0))
model.add_subsystem('comp', MultiJacVec(size))
model.connect('px.x', 'comp.x')
model.connect('py.y', 'comp.y')
model.add_design_var('px.x', vectorize_derivs=False)
model.add_design_var('py.y', vectorize_derivs=False)
model.add_constraint('comp.f_xy', vectorize_derivs=False)
prob.setup()
prob.run_model()
stream = StringIO()
prob.check_partials(out_stream=stream, compact_print=True)
self.assertEqual(stream.getvalue().count('rev'), 10)
def test_check_partials_worst_subjac(self):
# The first is printing the worst subjac at the bottom of the output. Worst is defined by
# looking at the fwd and rev columns of the relative error (i.e., the 2nd and 3rd last
# columns) of the compact_print=True output. We should print the component name, then
# repeat the full row for the worst-case subjac (i.e., output-input pair).
# This should only occur in the compact_print=True case.
prob = om.Problem()
prob.model.add_subsystem('p0', om.IndepVarComp('x1', 3.0))
prob.model.add_subsystem('p1', om.IndepVarComp('x2', 5.0))
prob.model.add_subsystem('p2', om.IndepVarComp('y2', 6.0))
prob.model.add_subsystem('good', MyCompGoodPartials())
prob.model.add_subsystem('bad', MyCompBadPartials())
prob.model.connect('p0.x1', 'good.x1')
prob.model.connect('p1.x2', 'good.x2')
prob.model.connect('good.y', 'bad.y1')
prob.model.connect('p2.y2', 'bad.y2')
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
stream = StringIO()
prob.check_partials(out_stream=stream, compact_print=True)
self.assertEqual(stream.getvalue().count("'z' wrt 'y1'"), 2)
def test_check_partials_show_only_incorrect(self):
# The second is adding an option to show only the incorrect subjacs
# (according to abs_err_tol and rel_err_tol), called
# show_only_incorrect. This should be False by default, but when True,
# it should print only the subjacs found to be incorrect. This applies
# to both compact_print=True and False.
prob = om.Problem()
prob.model.add_subsystem('p0', om.IndepVarComp('x1', 3.0))
prob.model.add_subsystem('p1', om.IndepVarComp('x2', 5.0))
prob.model.add_subsystem('p2', om.IndepVarComp('y2', 6.0))
prob.model.add_subsystem('good', MyCompGoodPartials())
prob.model.add_subsystem('bad', MyCompBadPartials())
prob.model.connect('p0.x1', 'good.x1')
prob.model.connect('p1.x2', 'good.x2')
prob.model.connect('good.y', 'bad.y1')
prob.model.connect('p2.y2', 'bad.y2')
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
stream = StringIO()
prob.check_partials(out_stream=stream, compact_print=True, show_only_incorrect=True)
self.assertEqual(stream.getvalue().count("MyCompBadPartials"), 2)
self.assertEqual(stream.getvalue().count("'z' wrt 'y1'"), 2)
self.assertEqual(stream.getvalue().count("MyCompGoodPartials"), 0)
stream = StringIO()
prob.check_partials(compact_print=False, show_only_incorrect=False)
prob.check_partials(out_stream=stream, compact_print=False, show_only_incorrect=True)
self.assertEqual(stream.getvalue().count("MyCompGoodPartials"), 0)
self.assertEqual(stream.getvalue().count("MyCompBadPartials"), 1)
def test_includes_excludes(self):
prob = om.Problem()
model = prob.model
sub = model.add_subsystem('c1c', om.Group())
sub.add_subsystem('d1', Paraboloid())
sub.add_subsystem('e1', Paraboloid())
sub2 = model.add_subsystem('sss', om.Group())
sub3 = sub2.add_subsystem('sss2', om.Group())
sub2.add_subsystem('d1', Paraboloid())
sub3.add_subsystem('e1', Paraboloid())
model.add_subsystem('abc1cab', Paraboloid())
prob.setup()
prob.run_model()
data = prob.check_partials(out_stream=None, includes='*c*c*')
self.assertEqual(len(data), 3)
self.assertTrue('c1c.d1' in data)
self.assertTrue('c1c.e1' in data)
self.assertTrue('abc1cab' in data)
data = prob.check_partials(out_stream=None, includes=['*d1', '*e1'])
self.assertEqual(len(data), 4)
self.assertTrue('c1c.d1' in data)
self.assertTrue('c1c.e1' in data)
self.assertTrue('sss.d1' in data)
self.assertTrue('sss.sss2.e1' in data)
data = prob.check_partials(out_stream=None, includes=['abc1cab'])
self.assertEqual(len(data), 1)
self.assertTrue('abc1cab' in data)
data = prob.check_partials(out_stream=None, includes='*c*c*', excludes=['*e*'])
self.assertEqual(len(data), 2)
self.assertTrue('c1c.d1' in data)
self.assertTrue('abc1cab' in data)
def test_directional_derivative_option(self):
prob = om.Problem()
model = prob.model
mycomp = model.add_subsystem('mycomp', ArrayComp(), promotes=['*'])
prob.setup()
prob.run_model()
data = prob.check_partials(out_stream=None)
# Note on why we run 10 times:
# 1 - Initial execution
# 2~3 - Called apply_nonlinear at the start of fwd and rev analytic deriv calculations
# 4 - Called apply_nonlinear to clean up before starting FD
# 5~8 - FD wrt bb, non-directional
# 9 - FD wrt x1, directional
# 10 - FD wrt x2, directional
self.assertEqual(mycomp.exec_count, 10)
assert_check_partials(data, atol=1.0E-8, rtol=1.0E-8)
stream = StringIO()
J = prob.check_partials(out_stream=stream, compact_print=True)
output = stream.getvalue()
self.assertTrue("(d)'x1'" in output)
self.assertTrue("(d)'x2'" in output)
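The execution tally in the comment above reflects how a directional check works: instead of finite-differencing each column of the Jacobian (one run per scalar entry of the input), the input is perturbed once along a random direction v and the result is compared against the analytic Jacobian-vector product. A minimal standalone sketch of that idea (the helper name is illustrative, not OpenMDAO API):

```python
import numpy as np

def directional_fd_check(f, x, J, h=1e-7, seed=1):
    # Perturb once along a random direction v and compare the finite-difference
    # quotient against the analytic Jacobian-vector product J @ v.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(x.size)
    fd_jv = (f(x + h * v) - f(x)) / h   # one extra execution, independent of x.size
    return np.max(np.abs(J @ v - fd_jv))

# For a linear component y = A @ x, the two products agree to FD accuracy.
A = np.array([[1.0, 3.0], [6.0, 2.5]])
err = directional_fd_check(lambda x: A @ x, np.ones(2), A)
assert err < 1e-6
```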
def test_directional_derivative_option_complex_step(self):
class ArrayCompCS(ArrayComp):
def setup(self):
super().setup()
self.set_check_partial_options('x*', directional=True, method='cs')
prob = om.Problem()
model = prob.model
mycomp = model.add_subsystem('mycomp', ArrayCompCS(), promotes=['*'])
np.random.seed(1)
prob.setup(check=False, force_alloc_complex=True)
prob.run_model()
data = prob.check_partials(method='cs', out_stream=None)
# Note on why we run 10 times:
# 1 - Initial execution
# 2~3 - Called apply_nonlinear at the start of fwd and rev analytic deriv calculations
# 4 - Called apply_nonlinear to clean up before starting FD
# 5~8 - FD wrt bb, non-directional
# 9 - FD wrt x1, directional
# 10 - FD wrt x2, directional
self.assertEqual(mycomp.exec_count, 10)
assert_check_partials(data, atol=1.0E-8, rtol=1.0E-8)
def test_directional_vectorized_matrix_free(self):
class TestDirectional(om.ExplicitComponent):
def initialize(self):
self.options.declare('n', default=1, desc='vector size')
self.n_compute = 0
self.n_fwd = 0
self.n_rev = 0
def setup(self):
self.add_input('in', shape=self.options['n'])
self.add_output('out', shape=self.options['n'])
self.set_check_partial_options(wrt='*', directional=True, method='cs')
def compute(self, inputs, outputs):
self.n_compute += 1
fac = 2.0 + np.arange(self.options['n'])
outputs['out'] = fac * inputs['in']
def compute_jacvec_product(self, inputs, d_inputs, d_outputs, mode):
fac = 2.0 + np.arange(self.options['n'])
if mode == 'fwd':
if 'out' in d_outputs:
if 'in' in d_inputs:
d_outputs['out'] = fac * d_inputs['in']
self.n_fwd += 1
if mode == 'rev':
if 'out' in d_outputs:
if 'in' in d_inputs:
d_inputs['in'] = fac * d_outputs['out']
self.n_rev += 1
prob = om.Problem()
model = prob.model
np.random.seed(1)
comp = TestDirectional(n=5)
model.add_subsystem('comp', comp)
prob.setup(force_alloc_complex=True)
prob.run_model()
J = prob.check_partials(method='cs', out_stream=None)
assert_check_partials(J)
self.assertEqual(comp.n_fwd, 1)
self.assertEqual(comp.n_rev, 1)
# Compact print needs to print the dot-product test.
stream = StringIO()
J = prob.check_partials(method='cs', out_stream=stream, compact_print=True)
lines = stream.getvalue().splitlines()
self.assertEqual(lines[6][43:46], 'n/a')
assert_near_equal(float(lines[6][95:105]), 0.0, 1e-15)
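The 'n/a' and near-zero entries asserted above come from the dot-product test that compact_print reports for matrix-free components: forward mode supplies J u and reverse mode supplies J^T v, and a consistent implementation satisfies v . (J u) == (J^T v) . u. A small sketch with explicit matrices (names are illustrative, not OpenMDAO internals):

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((5, 5))
u = rng.standard_normal(5)   # forward seed direction
v = rng.standard_normal(5)   # reverse seed direction

fwd = J @ u      # what compute_jacvec_product produces in 'fwd' mode
rev = J.T @ v    # what it produces in 'rev' mode

# A consistent fwd/rev pair satisfies v . (J u) == (J^T v) . u exactly,
# so the dot-product mismatch should be near machine precision.
mismatch = abs(v @ fwd - rev @ u)
assert mismatch < 1e-12
```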
def test_directional_mixed_matrix_free(self):
class ArrayCompMatrixFree(om.ExplicitComponent):
def setup(self):
J1 = np.array([[1.0, 3.0, -2.0, 7.0],
[6.0, 2.5, 2.0, 4.0],
[-1.0, 0.0, 8.0, 1.0],
[1.0, 4.0, -5.0, 6.0]])
self.J1 = J1
self.J2 = J1 * 3.3
self.Jb = J1.T
# Inputs
self.add_input('x1', np.zeros([4]))
self.add_input('x2', np.zeros([4]))
self.add_input('bb', np.zeros([4]))
# Outputs
self.add_output('y1', np.zeros([4]))
self.declare_partials(of='*', wrt='*')
self.set_check_partial_options('*', directional=True, method='fd')
def compute(self, inputs, outputs):
"""
Execution.
"""
outputs['y1'] = self.J1.dot(inputs['x1']) + self.J2.dot(inputs['x2']) + self.Jb.dot(inputs['bb'])
def compute_jacvec_product(self, inputs, dinputs, doutputs, mode):
"""Returns the product of the incoming vector with the Jacobian."""
if mode == 'fwd':
if 'x1' in dinputs:
doutputs['y1'] += self.J1.dot(dinputs['x1'])
if 'x2' in dinputs:
doutputs['y1'] += self.J2.dot(dinputs['x2'])
if 'bb' in dinputs:
doutputs['y1'] += self.Jb.dot(dinputs['bb'])
elif mode == 'rev':
if 'x1' in dinputs:
dinputs['x1'] += self.J1.T.dot(doutputs['y1'])
if 'x2' in dinputs:
dinputs['x2'] += self.J2.T.dot(doutputs['y1'])
if 'bb' in dinputs:
dinputs['bb'] += self.Jb.T.dot(doutputs['y1'])
prob = om.Problem()
model = prob.model
mycomp = model.add_subsystem('mycomp', ArrayCompMatrixFree(), promotes=['*'])
np.random.seed(1)
prob.setup()
prob.run_model()
J = prob.check_partials(method='fd', out_stream=None)
assert_check_partials(J)
def test_directional_mixed_matrix_free_central_diff(self):
class ArrayCompMatrixFree(om.ExplicitComponent):
def setup(self):
J1 = np.array([[1.0, 3.0, -2.0, 7.0],
[6.0, 2.5, 2.0, 4.0],
[-1.0, 0.0, 8.0, 1.0],
[1.0, 4.0, -5.0, 6.0]])
self.J1 = J1
self.J2 = J1 * 3.3
self.Jb = J1.T
# Inputs
self.add_input('x1', np.zeros([4]))
self.add_input('x2', np.zeros([4]))
self.add_input('bb', np.zeros([4]))
# Outputs
self.add_output('y1', np.zeros([4]))
self.declare_partials(of='*', wrt='*')
self.set_check_partial_options('*', directional=True, method='fd', form='central')
def compute(self, inputs, outputs):
"""
Execution.
"""
outputs['y1'] = self.J1.dot(inputs['x1']) + self.J2.dot(inputs['x2']) + self.Jb.dot(inputs['bb'])
def compute_jacvec_product(self, inputs, dinputs, doutputs, mode):
"""Returns the product of the incoming vector with the Jacobian."""
if mode == 'fwd':
if 'x1' in dinputs:
doutputs['y1'] += self.J1.dot(dinputs['x1'])
if 'x2' in dinputs:
doutputs['y1'] += self.J2.dot(dinputs['x2'])
if 'bb' in dinputs:
doutputs['y1'] += self.Jb.dot(dinputs['bb'])
elif mode == 'rev':
if 'x1' in dinputs:
dinputs['x1'] += self.J1.T.dot(doutputs['y1'])
if 'x2' in dinputs:
dinputs['x2'] += self.J2.T.dot(doutputs['y1'])
if 'bb' in dinputs:
dinputs['bb'] += self.Jb.T.dot(doutputs['y1'])
prob = om.Problem()
model = prob.model
mycomp = model.add_subsystem('mycomp', ArrayCompMatrixFree(), promotes=['*'])
np.random.seed(1)
prob.setup()
prob.run_model()
J = prob.check_partials(method='fd', out_stream=None)
assert_check_partials(J)
def test_directional_vectorized(self):
class TestDirectional(om.ExplicitComponent):
def initialize(self):
self.options.declare('n', default=1, desc='vector size')
self.n_compute = 0
self.n_fwd = 0
self.n_rev = 0
def setup(self):
self.add_input('in', shape=self.options['n'])
self.add_output('out', shape=self.options['n'])
self.declare_partials('out', 'in')
self.set_check_partial_options(wrt='*', directional=True, method='cs')
def compute(self, inputs, outputs):
self.n_compute += 1
fac = 2.0 + np.arange(self.options['n'])
outputs['out'] = fac * inputs['in']
def compute_partials(self, inputs, partials):
partials['out', 'in'] = np.diag(2.0 + np.arange(self.options['n']))
prob = om.Problem()
model = prob.model
np.random.seed(1)
comp = TestDirectional(n=5)
model.add_subsystem('comp', comp)
prob.setup(force_alloc_complex=True)
prob.run_model()
J = prob.check_partials(method='cs', out_stream=None)
assert_check_partials(J)
def test_directional_mixed_error_message(self):
import openmdao.api as om
class ArrayCompMatrixFree(om.ExplicitComponent):
def setup(self):
J1 = np.array([[1.0, 3.0, -2.0, 7.0],
[6.0, 2.5, 2.0, 4.0],
[-1.0, 0.0, 8.0, 1.0],
[1.0, 4.0, -5.0, 6.0]])
self.J1 = J1
self.J2 = J1 * 3.3
self.Jb = J1.T
# Inputs
self.add_input('x1', np.zeros([4]))
self.add_input('x2', np.zeros([4]))
self.add_input('bb', np.zeros([4]))
# Outputs
self.add_output('y1', np.zeros([4]))
self.declare_partials(of='*', wrt='*')
self.set_check_partial_options('x*', directional=True, method='fd')
def compute(self, inputs, outputs):
"""
Execution.
"""
outputs['y1'] = self.J1.dot(inputs['x1']) + self.J2.dot(inputs['x2']) + self.Jb.dot(inputs['bb'])
def compute_jacvec_product(self, inputs, dinputs, doutputs, mode):
"""Returns the product of the incoming vector with the Jacobian."""
if mode == 'fwd':
if 'x1' in dinputs:
doutputs['y1'] += self.J1.dot(dinputs['x1'])
if 'x2' in dinputs:
doutputs['y1'] += self.J2.dot(dinputs['x2'])
if 'bb' in dinputs:
doutputs['y1'] += self.Jb.dot(dinputs['bb'])
elif mode == 'rev':
if 'x1' in dinputs:
dinputs['x1'] += self.J1.T.dot(doutputs['y1'])
if 'x2' in dinputs:
dinputs['x2'] += self.J2.T.dot(doutputs['y1'])
if 'bb' in dinputs:
dinputs['bb'] += self.Jb.T.dot(doutputs['y1'])
prob = om.Problem()
model = prob.model
mycomp = model.add_subsystem('mycomp', ArrayCompMatrixFree(), promotes=['*'])
prob.setup()
prob.run_model()
with self.assertRaises(ValueError) as cm:
J = prob.check_partials(method='fd', out_stream=None)
msg = "'mycomp' <class ArrayCompMatrixFree>: For matrix free components, directional should be set to True for all inputs."
self.assertEqual(str(cm.exception), msg)
def test_directional_mimo(self):
class DirectionalComp(om.ExplicitComponent):
def initialize(self):
self.options.declare('n', default=1, desc='vector size')
def setup(self):
n = self.options['n']
self.add_input('in', shape=n)
self.add_input('in2', shape=n)
self.add_output('out', shape=n)
self.add_output('out2', shape=n)
self.set_check_partial_options(wrt='*', directional=True, method='cs')
self.mat = np.random.rand(n, n)
self.mat2 = np.random.rand(n, n)
def compute(self, inputs, outputs):
outputs['out'] = self.mat.dot(inputs['in']) + self.mat2.dot(inputs['in2'])
outputs['out2'] = 2.0 * self.mat.dot(inputs['in']) - self.mat2.dot(inputs['in2'])
def compute_jacvec_product(self, inputs, d_inputs, d_outputs, mode):
if mode == 'fwd':
if 'out' in d_outputs:
if 'in' in d_inputs:
d_outputs['out'] += self.mat.dot(d_inputs['in'])
if 'in2' in d_inputs:
d_outputs['out'] += self.mat2.dot(d_inputs['in2'])
if 'out2' in d_outputs:
if 'in' in d_inputs:
d_outputs['out2'] += 2.0 * self.mat.dot(d_inputs['in'])
if 'in2' in d_inputs:
d_outputs['out2'] += -1.0 * self.mat2.dot(d_inputs['in2'])
if mode == 'rev':
if 'out' in d_outputs:
if 'in' in d_inputs:
d_inputs['in'] += self.mat.transpose().dot(d_outputs['out'])
if 'in2' in d_inputs:
d_inputs['in2'] += self.mat2.transpose().dot(d_outputs['out'])
if 'out2' in d_outputs:
if 'in' in d_inputs:
# This one is wrong in reverse.
d_inputs['in'] += 999.0 * self.mat.transpose().dot(d_outputs['out2'])
if 'in2' in d_inputs:
d_inputs['in2'] += -1.0 * self.mat2.transpose().dot(d_outputs['out2'])
prob = om.Problem()
comp = DirectionalComp(n=2)
prob.model.add_subsystem('comp', comp)
prob.setup(force_alloc_complex=True)
prob.run_model()
partials = prob.check_partials(method='cs', out_stream=None)
self.assertGreater(np.abs(partials['comp']['out2', 'in']['directional_fwd_rev']),
1e-3, msg='Reverse deriv is supposed to be wrong.')
assert_near_equal(np.abs(partials['comp']['out', 'in']['directional_fwd_rev']),
0.0, 1e-12)
assert_near_equal(np.abs(partials['comp']['out', 'in2']['directional_fwd_rev']),
0.0, 1e-12)
assert_near_equal(np.abs(partials['comp']['out2', 'in2']['directional_fwd_rev']),
0.0, 1e-12)
def test_bug_local_method(self):
# Tests the fix for a bug where setting the check method on one component
# overrode the requested method for subsequent components.
prob = om.Problem()
model = prob.model
model.add_subsystem('comp1', Paraboloid())
fdcomp = model.add_subsystem('comp2', Paraboloid())
model.add_subsystem('comp3', Paraboloid())
fdcomp.set_check_partial_options(wrt='*', method='fd')
prob.setup(check=False, force_alloc_complex=True)
prob.set_solver_print(level=0)
prob.run_model()
data = prob.check_partials(method='cs', out_stream=None)
# Comp1 and Comp3 are complex step, so have tighter tolerances.
for key, val in data['comp1'].items():
assert_near_equal(val['rel error'][0], 0.0, 1e-15)
for key, val in data['comp2'].items():
assert_near_equal(val['rel error'][0], 0.0, 1e-6)
for key, val in data['comp3'].items():
assert_near_equal(val['rel error'][0], 0.0, 1e-15)
def test_rel_error_fd_zero(self):
# When the FD value turns out to be zero, test that the relative error
# definition switches to dividing by the forward derivative instead of
# reporting NaN.
class SimpleComp2(om.ExplicitComponent):
def setup(self):
self.add_input('x', val=3.0)
self.add_output('y', val=4.0)
self.declare_partials(of='y', wrt='x')
def compute(self, inputs, outputs):
# Mimics forgetting to set a variable.
pass
def compute_partials(self, inputs, partials):
partials['y', 'x'] = 3.0
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('x', 3.5))
prob.model.add_subsystem('comp', SimpleComp2())
prob.model.connect('p1.x', 'comp.x')
prob.setup()
stream = StringIO()
data = prob.check_partials(out_stream=stream)
lines = stream.getvalue().splitlines()
self.assertTrue("Relative Error (Jan - Jfd) / Jan : 1." in lines[8])
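A sketch of the fallback this test exercises: the relative error is normally |Jan - Jfd| / |Jfd|, but when the FD value is zero the check divides by the analytic value instead, so a forgotten output yields 1.0 rather than NaN (helper name is hypothetical, not the actual implementation):

```python
import numpy as np

def rel_error(j_an, j_fd):
    # Normally normalize by the FD value; fall back to the analytic value
    # when the FD derivative is exactly zero, to avoid reporting NaN.
    denom = np.abs(j_fd) if np.abs(j_fd) > 0.0 else np.abs(j_an)
    return np.abs(j_an - j_fd) / denom

assert rel_error(3.0, 0.0) == 1.0   # zero FD: divide by the analytic value
assert rel_error(3.0, 2.0) == 0.5   # ordinary case: divide by the FD value
```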
def test_directional_bug_implicit(self):
# Test for a bug in the directional derivative direction for implicit
# variables in matrix-free components.
class Directional(om.ImplicitComponent):
def setup(self):
self.add_input('in', shape=3)
self.add_input('in2', shape=3)
self.add_output('out', shape=3)
self.set_check_partial_options(wrt='*', directional=True, method='cs')
self.mat = np.random.rand(3, 3)
self.mat2 = np.random.rand(3, 3)
def apply_nonlinear(self, inputs, outputs, residuals):
residuals['out'] = self.mat.dot(inputs['in']) + self.mat2.dot(inputs['in2']) - outputs['out']
def apply_linear(self, inputs, outputs, d_inputs, d_outputs, d_residuals, mode):
if mode == 'fwd':
if 'out' in d_residuals:
if 'in' in d_inputs:
d_residuals['out'] += self.mat.dot(d_inputs['in'])
if 'in2' in d_inputs:
d_residuals['out'] += self.mat2.dot(d_inputs['in2'])
if 'out' in d_outputs:
d_residuals['out'] -= d_outputs['out']
if mode == 'rev':
if 'out' in d_residuals:
if 'in' in d_inputs:
d_inputs['in'] += self.mat.transpose().dot(d_residuals['out'])
if 'in2' in d_inputs:
d_inputs['in2'] += self.mat2.transpose().dot(d_residuals['out'])
if 'out' in d_outputs:
d_outputs['out'] -= d_residuals['out']
prob = om.Problem()
comp = Directional()
prob.model.add_subsystem('comp', comp)
prob.setup(force_alloc_complex=True)
prob.run_model()
partials = prob.check_partials(method='cs', out_stream=None)
assert_check_partials(partials)
class TestCheckPartialsFeature(unittest.TestCase):
def test_feature_incorrect_jacobian(self):
import numpy as np
import openmdao.api as om
class MyComp(om.ExplicitComponent):
def setup(self):
self.add_input('x1', 3.0)
self.add_input('x2', 5.0)
self.add_output('y', 5.5)
self.declare_partials(of='*', wrt='*')
def compute(self, inputs, outputs):
outputs['y'] = 3.0*inputs['x1'] + 4.0*inputs['x2']
def compute_partials(self, inputs, partials):
"""Intentionally incorrect derivative."""
J = partials
J['y', 'x1'] = np.array([4.0])
J['y', 'x2'] = np.array([40])
prob = om.Problem()
prob.model.add_subsystem('comp', MyComp())
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
data = prob.check_partials()
x1_error = data['comp']['y', 'x1']['abs error']
assert_near_equal(x1_error.forward, 1., 1e-8)
x2_error = data['comp']['y', 'x2']['rel error']
assert_near_equal(x2_error.forward, 9., 1e-8)
def test_feature_check_partials_suppress(self):
import numpy as np
import openmdao.api as om
class MyComp(om.ExplicitComponent):
def setup(self):
self.add_input('x1', 3.0)
self.add_input('x2', 5.0)
self.add_output('y', 5.5)
self.declare_partials(of='*', wrt='*')
def compute(self, inputs, outputs):
outputs['y'] = 3.0*inputs['x1'] + 4.0*inputs['x2']
def compute_partials(self, inputs, partials):
"""Intentionally incorrect derivative."""
J = partials
J['y', 'x1'] = np.array([4.0])
J['y', 'x2'] = np.array([40])
prob = om.Problem()
prob.model.add_subsystem('comp', MyComp())
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
data = prob.check_partials(out_stream=None, compact_print=True)
print(data)
def test_set_step_on_comp(self):
import openmdao.api as om
from openmdao.core.tests.test_check_derivs import ParaboloidTricky
from openmdao.test_suite.components.paraboloid_mat_vec import ParaboloidMatVec
prob = om.Problem()
comp = prob.model.add_subsystem('comp', ParaboloidTricky())
prob.model.add_subsystem('comp2', ParaboloidMatVec())
prob.model.connect('comp.f_xy', 'comp2.x')
prob.set_solver_print(level=0)
comp.set_check_partial_options(wrt='*', step=1e-2)
prob.setup()
prob.run_model()
prob.check_partials(compact_print=True)
def test_set_step_global(self):
import openmdao.api as om
from openmdao.core.tests.test_check_derivs import ParaboloidTricky
from openmdao.test_suite.components.paraboloid_mat_vec import ParaboloidMatVec
prob = om.Problem()
prob.model.add_subsystem('comp', ParaboloidTricky())
prob.model.add_subsystem('comp2', ParaboloidMatVec())
prob.model.connect('comp.f_xy', 'comp2.x')
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
prob.check_partials(step=1e-2, compact_print=True)
def test_set_method_on_comp(self):
import openmdao.api as om
from openmdao.core.tests.test_check_derivs import ParaboloidTricky
from openmdao.test_suite.components.paraboloid_mat_vec import ParaboloidMatVec
prob = om.Problem()
comp = prob.model.add_subsystem('comp', ParaboloidTricky())
prob.model.add_subsystem('comp2', ParaboloidMatVec())
prob.model.connect('comp.f_xy', 'comp2.x')
prob.set_solver_print(level=0)
comp.set_check_partial_options(wrt='*', method='cs')
prob.setup(force_alloc_complex=True)
prob.run_model()
prob.check_partials(compact_print=True)
def test_set_method_global(self):
import openmdao.api as om
from openmdao.core.tests.test_check_derivs import ParaboloidTricky
from openmdao.test_suite.components.paraboloid_mat_vec import ParaboloidMatVec
prob = om.Problem()
prob.model.add_subsystem('comp', ParaboloidTricky())
prob.model.add_subsystem('comp2', ParaboloidMatVec())
prob.model.connect('comp.f_xy', 'comp2.x')
prob.set_solver_print(level=0)
prob.setup(force_alloc_complex=True)
prob.run_model()
prob.check_partials(method='cs', compact_print=True)
def test_set_form_global(self):
import openmdao.api as om
from openmdao.core.tests.test_check_derivs import ParaboloidTricky
from openmdao.test_suite.components.paraboloid_mat_vec import ParaboloidMatVec
prob = om.Problem()
prob.model.add_subsystem('comp', ParaboloidTricky())
prob.model.add_subsystem('comp2', ParaboloidMatVec())
prob.model.connect('comp.f_xy', 'comp2.x')
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
prob.check_partials(form='central', compact_print=True)
def test_set_step_calc_global(self):
import openmdao.api as om
from openmdao.core.tests.test_check_derivs import ParaboloidTricky
prob = om.Problem()
prob.model.add_subsystem('comp', ParaboloidTricky())
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
prob.check_partials(step_calc='rel', compact_print=True)
def test_feature_check_partials_show_only_incorrect(self):
import numpy as np
import openmdao.api as om
class MyCompGoodPartials(om.ExplicitComponent):
def setup(self):
self.add_input('x1', 3.0)
self.add_input('x2', 5.0)
self.add_output('y', 5.5)
self.declare_partials(of='*', wrt='*')
def compute(self, inputs, outputs):
outputs['y'] = 3.0 * inputs['x1'] + 4.0 * inputs['x2']
def compute_partials(self, inputs, partials):
"""Correct derivative."""
J = partials
J['y', 'x1'] = np.array([3.0])
J['y', 'x2'] = np.array([4.0])
class MyCompBadPartials(om.ExplicitComponent):
def setup(self):
self.add_input('y1', 3.0)
self.add_input('y2', 5.0)
self.add_output('z', 5.5)
self.declare_partials(of='*', wrt='*')
def compute(self, inputs, outputs):
outputs['z'] = 3.0 * inputs['y1'] + 4.0 * inputs['y2']
def compute_partials(self, inputs, partials):
"""Intentionally incorrect derivative."""
J = partials
J['z', 'y1'] = np.array([33.0])
J['z', 'y2'] = np.array([40.0])
prob = om.Problem()
prob.model.add_subsystem('good', MyCompGoodPartials())
prob.model.add_subsystem('bad', MyCompBadPartials())
prob.model.connect('good.y', 'bad.y1')
prob.set_solver_print(level=0)
prob.setup()
prob.run_model()
prob.check_partials(compact_print=True, show_only_incorrect=True)
prob.check_partials(compact_print=False, show_only_incorrect=True)
def test_includes_excludes(self):
import openmdao.api as om
from openmdao.test_suite.components.paraboloid import Paraboloid
prob = om.Problem()
model = prob.model
sub = model.add_subsystem('c1c', om.Group())
sub.add_subsystem('d1', Paraboloid())
sub.add_subsystem('e1', Paraboloid())
sub2 = model.add_subsystem('sss', om.Group())
sub3 = sub2.add_subsystem('sss2', om.Group())
sub2.add_subsystem('d1', Paraboloid())
sub3.add_subsystem('e1', Paraboloid())
model.add_subsystem('abc1cab', Paraboloid())
prob.setup()
prob.run_model()
prob.check_partials(compact_print=True, includes='*c*c*')
prob.check_partials(compact_print=True, includes=['*d1', '*e1'])
prob.check_partials(compact_print=True, includes=['abc1cab'])
prob.check_partials(compact_print=True, includes='*c*c*', excludes=['*e*'])
def test_directional(self):
import openmdao.api as om
from openmdao.test_suite.components.array_comp import ArrayComp
prob = om.Problem()
model = prob.model
mycomp = model.add_subsystem('mycomp', ArrayComp(), promotes=['*'])
prob.setup()
prob.run_model()
data = prob.check_partials()
def test_directional_matrix_free(self):
import numpy as np
import openmdao.api as om
class ArrayCompMatrixFree(om.ExplicitComponent):
def setup(self):
J1 = np.array([[1.0, 3.0, -2.0, 7.0],
[6.0, 2.5, 2.0, 4.0],
[-1.0, 0.0, 8.0, 1.0],
[1.0, 4.0, -5.0, 6.0]])
self.J1 = J1
self.J2 = J1 * 3.3
self.Jb = J1.T
# Inputs
self.add_input('x1', np.zeros([4]))
self.add_input('x2', np.zeros([4]))
self.add_input('bb', np.zeros([4]))
# Outputs
self.add_output('y1', np.zeros([4]))
self.declare_partials(of='*', wrt='*')
self.set_check_partial_options('*', directional=True)
def compute(self, inputs, outputs):
"""
Execution.
"""
outputs['y1'] = self.J1.dot(inputs['x1']) + self.J2.dot(inputs['x2']) + self.Jb.dot(inputs['bb'])
def compute_jacvec_product(self, inputs, dinputs, doutputs, mode):
"""Returns the product of the incoming vector with the Jacobian."""
if mode == 'fwd':
if 'x1' in dinputs:
doutputs['y1'] += self.J1.dot(dinputs['x1'])
if 'x2' in dinputs:
doutputs['y1'] += self.J2.dot(dinputs['x2'])
if 'bb' in dinputs:
doutputs['y1'] += self.Jb.dot(dinputs['bb'])
elif mode == 'rev':
if 'x1' in dinputs:
dinputs['x1'] += self.J1.T.dot(doutputs['y1'])
if 'x2' in dinputs:
dinputs['x2'] += self.J2.T.dot(doutputs['y1'])
if 'bb' in dinputs:
dinputs['bb'] += self.Jb.T.dot(doutputs['y1'])
prob = om.Problem()
model = prob.model
model.add_subsystem('mycomp', ArrayCompMatrixFree(), promotes=['*'])
prob.setup()
prob.run_model()
data = prob.check_partials()
def test_set_method_and_step_bug(self):
# If a model builder sets a component to fd, and the global method is cs with a
# specified step size, that step size is probably unusable for fd and can lead
# to false errors in the check.
prob = om.Problem()
prob.model.add_subsystem('p1', om.IndepVarComp('x', 3.0))
prob.model.add_subsystem('p2', om.IndepVarComp('y', 5.0))
comp = prob.model.add_subsystem('comp', Paraboloid())
prob.model.connect('p1.x', 'comp.x')
prob.model.connect('p2.y', 'comp.y')
prob.set_solver_print(level=0)
comp.set_check_partial_options(wrt='*', method='fd')
prob.setup(force_alloc_complex=True)
prob.run_model()
J = prob.check_partials(compact_print=True, method='cs', step=1e-40, out_stream=None)
assert_check_partials(J, atol=1e-5, rtol=1e-5)
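The false errors arise because a complex-step-sized increment underflows in plain finite difference: in float64, x + 1e-40 == x, so the FD quotient is identically zero while the analytic derivative is not. A quick demonstration:

```python
x = 3.0
h = 1e-40                       # fine for complex step, hopeless for fd
f = lambda x: x ** 2

fd = (f(x + h) - f(x)) / h      # numerator is exactly zero in float64
assert x + h == x
assert fd == 0.0
```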
class DistribParaboloid(om.ExplicitComponent):
def setup(self):
self.options['distributed'] = True
if self.comm.rank == 0:
ndvs = 3
else:
ndvs = 2
self.add_input('w', val=1.) # this will connect to a non-distributed IVC
self.add_input('x', shape=ndvs) # this will connect to a distributed IVC
self.add_output('y', shape=2) # all-gathered output, duplicated on all procs
self.add_output('z', shape=ndvs) # distributed output
self.declare_partials('y', 'x')
self.declare_partials('y', 'w')
self.declare_partials('z', 'x')
def compute(self, inputs, outputs):
x = inputs['x']
local_y = np.sum((x-5)**2)
y_g = np.zeros(self.comm.size)
self.comm.Allgather(local_y, y_g)
val = np.sum(y_g) + (inputs['w']-10)**2
outputs['y'] = np.array([val, val*3.])
outputs['z'] = x**2
def compute_partials(self, inputs, J):
x = inputs['x']
J['y', 'x'] = np.array([2*(x-5), 6*(x-5)])
J['y', 'w'] = np.array([2*(inputs['w']-10), 6*(inputs['w']-10)])
J['z', 'x'] = np.diag(2*x)
class DistribParaboloid2D(om.ExplicitComponent):
def setup(self):
comm = self.comm
rank = comm.rank
if rank == 0:
vshape = (3, 2)
else:
vshape = (2, 2)
self.options['distributed'] = True
self.add_input('w', val=1., src_indices=np.array([1])) # this will connect to a non-distributed IVC
self.add_input('x', shape=vshape) # this will connect to a distributed IVC
self.add_output('y') # all-gathered output, duplicated on all procs
self.add_output('z', shape=vshape) # distributed output
self.declare_partials('y', 'x')
self.declare_partials('y', 'w')
self.declare_partials('z', 'x')
def compute(self, inputs, outputs):
x = inputs['x']
local_y = np.sum((x-5)**2)
y_g = np.zeros(self.comm.size)
self.comm.Allgather(local_y, y_g)
outputs['y'] = np.sum(y_g) + (inputs['w']-10)**2
outputs['z'] = x**2
def compute_partials(self, inputs, J):
x = inputs['x'].flatten()
J['y', 'x'] = 2*(x-5)
J['y', 'w'] = 2*(inputs['w']-10)
J['z', 'x'] = np.diag(2*x)
@unittest.skipUnless(MPI and PETScVector, "MPI and PETSc are required.")
class TestProblemComputeTotalsGetRemoteFalse(unittest.TestCase):
N_PROCS = 2
def _do_compute_totals(self, mode):
comm = MPI.COMM_WORLD
p = om.Problem()
d_ivc = p.model.add_subsystem('distrib_ivc',
om.IndepVarComp(distributed=True),
promotes=['*'])
if comm.rank == 0:
ndvs = 3
else:
ndvs = 2
d_ivc.add_output('x', 2*np.ones(ndvs))
ivc = p.model.add_subsystem('ivc',
om.IndepVarComp(distributed=False),
promotes=['*'])
ivc.add_output('w', 2.0)
p.model.add_subsystem('dp', DistribParaboloid(), promotes=['*'])
p.model.add_design_var('x', lower=-100, upper=100)
p.model.add_objective('y')
p.setup(mode=mode)
p.run_model()
dv_vals = p.driver.get_design_var_values(get_remote=False)
# Compute totals and check the length of the gradient array on each proc
objcongrad = p.compute_totals(get_remote=False)
# Check the values of the gradient array
assert_near_equal(objcongrad[('dp.y', 'distrib_ivc.x')][0], -6.0*np.ones(ndvs))
assert_near_equal(objcongrad[('dp.y', 'distrib_ivc.x')][1], -18.0*np.ones(ndvs))
def test_distrib_compute_totals_fwd(self):
self._do_compute_totals('fwd')
def test_distrib_compute_totals_rev(self):
self._do_compute_totals('rev')
def _do_compute_totals_2D(self, mode):
# this test has some non-flat variables
comm = MPI.COMM_WORLD
p = om.Problem()
d_ivc = p.model.add_subsystem('distrib_ivc',
om.IndepVarComp(distributed=True),
promotes=['*'])
if comm.rank == 0:
ndvs = 6
two_d = (3, 2)
else:
ndvs = 4
two_d = (2, 2)
d_ivc.add_output('x', 2*np.ones(two_d))
ivc = p.model.add_subsystem('ivc',
om.IndepVarComp(distributed=False),
promotes=['*'])
ivc.add_output('w', 2.0)
p.model.add_subsystem('dp', DistribParaboloid2D(), promotes=['*'])
p.model.add_design_var('x', lower=-100, upper=100)
p.model.add_objective('y')
p.setup(mode=mode)
p.run_model()
dv_vals = p.driver.get_design_var_values(get_remote=False)
# Compute totals and check the length of the gradient array on each proc
objcongrad = p.compute_totals(get_remote=False)
# Check the values of the gradient array
assert_near_equal(objcongrad[('dp.y', 'distrib_ivc.x')][0], -6.0*np.ones(ndvs))
def test_distrib_compute_totals_2D_fwd(self):
self._do_compute_totals_2D('fwd')
def test_distrib_compute_totals_2D_rev(self):
self._do_compute_totals_2D('rev')
def _remotevar_compute_totals(self, mode):
indep_list = ['iv.x']
unknown_list = [
'c1.y1',
'c1.y2',
'sub.c2.y1',
'sub.c3.y1',
'c4.y1',
'c4.y2',
]
full_expected = {
('c1.y1', 'iv.x'): [[8.]],
('c1.y2', 'iv.x'): [[3.]],
('sub.c2.y1', 'iv.x'): [[4.]],
('sub.c3.y1', 'iv.x'): [[10.5]],
('c4.y1', 'iv.x'): [[25.]],
('c4.y2', 'iv.x'): [[-40.5]],
}
prob = om.Problem()
prob.model = Diamond()
prob.setup(mode=mode)
prob.set_solver_print(level=0)
prob.run_model()
assert_near_equal(prob['c4.y1'], 46.0, 1e-6)
assert_near_equal(prob['c4.y2'], -93.0, 1e-6)
J = prob.compute_totals(of=unknown_list, wrt=indep_list)
for key, val in full_expected.items():
assert_near_equal(J[key], val, 1e-6)
reduced_expected = {key: v for key, v in full_expected.items() if key[0] in prob.model._var_abs2meta['output']}
J = prob.compute_totals(of=unknown_list, wrt=indep_list, get_remote=False)
for key, val in reduced_expected.items():
assert_near_equal(J[key], val, 1e-6)
self.assertEqual(len(J), len(reduced_expected))
def test_remotevar_compute_totals_fwd(self):
self._remotevar_compute_totals('fwd')
def test_remotevar_compute_totals_rev(self):
self._remotevar_compute_totals('rev')
class TestProblemCheckTotals(unittest.TestCase):
def test_cs(self):
prob = om.Problem()
prob.model = SellarDerivatives()
prob.model.nonlinear_solver = om.NonlinearBlockGS()
prob.model.add_design_var('x', lower=-100, upper=100)
prob.model.add_design_var('z', lower=-100, upper=100)
prob.model.add_objective('obj')
prob.model.add_constraint('con1', upper=0.0)
prob.model.add_constraint('con2', upper=0.0)
prob.set_solver_print(level=0)
prob.setup(force_alloc_complex=True)
prob.model.nonlinear_solver.options['atol'] = 1e-15
prob.model.nonlinear_solver.options['rtol'] = 1e-15
# We don't call run_driver() here because we don't
# actually want the optimizer to run
prob.run_model()
# check derivatives with complex step and a larger step size.
stream = StringIO()
totals = prob.check_totals(method='cs', out_stream=stream)
lines = stream.getvalue().splitlines()
# Make sure auto-ivc sources are translated to promoted input names.
self.assertTrue('x' in lines[3])
self.assertTrue('9.80614' in lines[4], "'9.80614' not found in '%s'" % lines[4])
self.assertTrue('9.80614' in lines[5], "'9.80614' not found in '%s'" % lines[5])
self.assertTrue('cs:None' in lines[5], "'cs:None' not found in '%s'" % lines[5])
assert_near_equal(totals['con_cmp2.con2', 'x']['J_fwd'], [[0.09692762]], 1e-5)
assert_near_equal(totals['con_cmp2.con2', 'x']['J_fd'], [[0.09692762]], 1e-5)
# Test compact_print output
compact_stream = StringIO()
compact_totals = prob.check_totals(method='fd', out_stream=compact_stream,
compact_print=True)
compact_lines = compact_stream.getvalue().splitlines()
self.assertTrue('<output>' in compact_lines[3],
"'<output>' not found in '%s'" % compact_lines[3])
self.assertTrue('9.7743e+00' in compact_lines[11],
"'9.7743e+00' not found in '%s'" % compact_lines[11])
def test_check_totals_show_progress(self):
prob = om.Problem()
prob.model = SellarDerivatives()
prob.model.nonlinear_solver = om.NonlinearBlockGS()
prob.model.add_design_var('x', lower=-100, upper=100)
prob.model.add_design_var('z', lower=-100, upper=100)
prob.model.add_objective('obj')
prob.model.add_constraint('con1', upper=0.0)
prob.model.add_constraint('con2', upper=0.0)
prob.set_solver_print(level=0)
prob.setup(force_alloc_complex=True)
prob.model.nonlinear_solver.options['atol'] = 1e-15
prob.model.nonlinear_solver.options['rtol'] = 1e-15
# We don't call run_driver() here because we don't
# actually want the optimizer to run
prob.run_model()
# check derivatives with complex step and a larger step size.
stream = StringIO()
totals = prob.check_totals(method='fd', show_progress=True, out_stream=stream)
lines = stream.getvalue().splitlines()
self.assertTrue("1/3: Checking derivatives with respect to: 'd1.z [0]' ..." in lines[0])
self.assertTrue("2/3: Checking derivatives with respect to: 'd1.z [1]' ..." in lines[1])
self.assertTrue("3/3: Checking derivatives with respect to: 'd1.x [2]' ..." in lines[2])
prob.run_model()
# Check to make sure nothing is going to output
stream = StringIO()
totals = prob.check_totals(method='fd', show_progress=False, out_stream=stream)
lines = stream.getvalue()
self.assertFalse("Checking derivatives with respect to" in lines)
prob.check_totals(method='fd', show_progress=True)
    def test_desvar_as_obj(self):
        prob = om.Problem()
        prob.model = SellarDerivatives()
        prob.model.nonlinear_solver = om.NonlinearBlockGS()

        prob.model.add_design_var('x', lower=-100, upper=100)
        prob.model.add_objective('x')

        prob.set_solver_print(level=0)

        prob.setup(force_alloc_complex=True)

        # We don't call run_driver() here because we don't
        # actually want the optimizer to run
        prob.run_model()

        # check derivatives with complex step and a larger step size.
        stream = StringIO()
        totals = prob.check_totals(method='cs', out_stream=stream)

        lines = stream.getvalue().splitlines()

        self.assertTrue('1.000' in lines[4])
        self.assertTrue('1.000' in lines[5])
        self.assertTrue('0.000' in lines[6])
        self.assertTrue('0.000' in lines[8])

        assert_near_equal(totals['x', 'x']['J_fwd'], [[1.0]], 1e-5)
        assert_near_equal(totals['x', 'x']['J_fd'], [[1.0]], 1e-5)
    def test_desvar_and_response_with_indices(self):

        class ArrayComp2D(om.ExplicitComponent):
            """
            A fairly simple array component.
            """

            def setup(self):
                self.JJ = np.array([[1.0, 3.0, -2.0, 7.0],
                                    [6.0, 2.5, 2.0, 4.0],
                                    [-1.0, 0.0, 8.0, 1.0],
                                    [1.0, 4.0, -5.0, 6.0]])

                # Params
                self.add_input('x1', np.zeros([4]))

                # Unknowns
                self.add_output('y1', np.zeros([4]))

                self.declare_partials(of='*', wrt='*')

            def compute(self, inputs, outputs):
                """
                Execution.
                """
                outputs['y1'] = self.JJ.dot(inputs['x1'])

            def compute_partials(self, inputs, partials):
                """
                Analytical derivatives.
                """
                partials[('y1', 'x1')] = self.JJ

        prob = om.Problem()
        model = prob.model
        model.add_subsystem('x_param1', om.IndepVarComp('x1', np.ones((4))),
                            promotes=['x1'])
        mycomp = model.add_subsystem('mycomp', ArrayComp2D(), promotes=['x1', 'y1'])

        model.add_design_var('x1', indices=[1, 3])
        model.add_constraint('y1', indices=[0, 2])

        prob.set_solver_print(level=0)
        prob.setup(check=False, mode='fwd')
        prob.run_model()

        Jbase = mycomp.JJ
        of = ['y1']
        wrt = ['x1']

        J = prob.compute_totals(of=of, wrt=wrt, return_format='flat_dict')
        assert_near_equal(J['y1', 'x1'][0][0], Jbase[0, 1], 1e-8)
        assert_near_equal(J['y1', 'x1'][0][1], Jbase[0, 3], 1e-8)
        assert_near_equal(J['y1', 'x1'][1][0], Jbase[2, 1], 1e-8)
        assert_near_equal(J['y1', 'x1'][1][1], Jbase[2, 3], 1e-8)

        totals = prob.check_totals()
        jac = totals[('mycomp.y1', 'x_param1.x1')]['J_fd']
        assert_near_equal(jac[0][0], Jbase[0, 1], 1e-8)
        assert_near_equal(jac[0][1], Jbase[0, 3], 1e-8)
        assert_near_equal(jac[1][0], Jbase[2, 1], 1e-8)
        assert_near_equal(jac[1][1], Jbase[2, 3], 1e-8)

        # Objective instead
        prob = om.Problem()
        model = prob.model
        model.add_subsystem('x_param1', om.IndepVarComp('x1', np.ones((4))),
                            promotes=['x1'])
        mycomp = model.add_subsystem('mycomp', ArrayComp2D(), promotes=['x1', 'y1'])

        model.add_design_var('x1', indices=[1, 3])
        model.add_objective('y1', index=1)

        prob.set_solver_print(level=0)
        prob.setup(check=False, mode='fwd')
        prob.run_model()

        Jbase = mycomp.JJ
        of = ['y1']
        wrt = ['x1']

        J = prob.compute_totals(of=of, wrt=wrt, return_format='flat_dict')
        assert_near_equal(J['y1', 'x1'][0][0], Jbase[1, 1], 1e-8)
        assert_near_equal(J['y1', 'x1'][0][1], Jbase[1, 3], 1e-8)

        totals = prob.check_totals()
        jac = totals[('mycomp.y1', 'x_param1.x1')]['J_fd']
        assert_near_equal(jac[0][0], Jbase[1, 1], 1e-8)
        assert_near_equal(jac[0][1], Jbase[1, 3], 1e-8)
    def test_cs_suppress(self):
        prob = om.Problem()
        prob.model = SellarDerivatives()
        prob.model.nonlinear_solver = om.NonlinearBlockGS()

        prob.model.add_design_var('x', lower=-100, upper=100)
        prob.model.add_design_var('z', lower=-100, upper=100)
        prob.model.add_objective('obj')
        prob.model.add_constraint('con1', upper=0.0)
        prob.model.add_constraint('con2', upper=0.0)

        prob.set_solver_print(level=0)

        prob.setup(force_alloc_complex=True)

        # We don't call run_driver() here because we don't
        # actually want the optimizer to run
        prob.run_model()

        # check derivatives with complex step and a larger step size.
        totals = prob.check_totals(method='cs', out_stream=None)

        data = totals['con_cmp2.con2', 'x']
        self.assertTrue('J_fwd' in data)
        self.assertTrue('rel error' in data)
        self.assertTrue('abs error' in data)
        self.assertTrue('magnitude' in data)
    def test_two_desvar_as_con(self):
        prob = om.Problem()
        prob.model = SellarDerivatives()
        prob.model.nonlinear_solver = om.NonlinearBlockGS()

        prob.model.add_design_var('z', lower=-100, upper=100)
        prob.model.add_design_var('x', lower=-100, upper=100)
        prob.model.add_constraint('x', upper=0.0)
        prob.model.add_constraint('z', upper=0.0)

        prob.set_solver_print(level=0)

        prob.setup()

        # We don't call run_driver() here because we don't
        # actually want the optimizer to run
        prob.run_model()

        totals = prob.check_totals(method='fd', step=1.0e-1, out_stream=None)

        assert_near_equal(totals['x', 'x']['J_fwd'], [[1.0]], 1e-5)
        assert_near_equal(totals['x', 'x']['J_fd'], [[1.0]], 1e-5)
        assert_near_equal(totals['z', 'z']['J_fwd'], np.eye(2), 1e-5)
        assert_near_equal(totals['z', 'z']['J_fd'], np.eye(2), 1e-5)
        assert_near_equal(totals['x', 'z']['J_fwd'], [[0.0, 0.0]], 1e-5)
        assert_near_equal(totals['x', 'z']['J_fd'], [[0.0, 0.0]], 1e-5)
        assert_near_equal(totals['z', 'x']['J_fwd'], [[0.0], [0.0]], 1e-5)
        assert_near_equal(totals['z', 'x']['J_fd'], [[0.0], [0.0]], 1e-5)
    def test_full_con_with_index_desvar(self):
        prob = om.Problem()
        prob.model = SellarDerivatives()
        prob.model.nonlinear_solver = om.NonlinearBlockGS()

        prob.model.add_design_var('z', lower=-100, upper=100, indices=[1])
        prob.model.add_constraint('z', upper=0.0)

        prob.set_solver_print(level=0)

        prob.setup()

        # We don't call run_driver() here because we don't
        # actually want the optimizer to run
        prob.run_model()

        totals = prob.check_totals(method='fd', step=1.0e-1, out_stream=None)

        assert_near_equal(totals['z', 'z']['J_fwd'], [[0.0], [1.0]], 1e-5)
        assert_near_equal(totals['z', 'z']['J_fd'], [[0.0], [1.0]], 1e-5)

    def test_full_desvar_with_index_con(self):
        prob = om.Problem()
        prob.model = SellarDerivatives()
        prob.model.nonlinear_solver = om.NonlinearBlockGS()

        prob.model.add_design_var('z', lower=-100, upper=100)
        prob.model.add_constraint('z', upper=0.0, indices=[1])

        prob.set_solver_print(level=0)

        prob.setup()

        # We don't call run_driver() here because we don't
        # actually want the optimizer to run
        prob.run_model()

        totals = prob.check_totals(method='fd', step=1.0e-1, out_stream=None)

        assert_near_equal(totals['z', 'z']['J_fwd'], [[0.0, 1.0]], 1e-5)
        assert_near_equal(totals['z', 'z']['J_fd'], [[0.0, 1.0]], 1e-5)

    def test_full_desvar_with_index_obj(self):
        prob = om.Problem()
        prob.model = SellarDerivatives()
        prob.model.nonlinear_solver = om.NonlinearBlockGS()

        prob.model.add_design_var('z', lower=-100, upper=100)
        prob.model.add_objective('z', index=1)

        prob.set_solver_print(level=0)

        prob.setup()

        # We don't call run_driver() here because we don't
        # actually want the optimizer to run
        prob.run_model()

        totals = prob.check_totals(method='fd', step=1.0e-1, out_stream=None)

        assert_near_equal(totals['z', 'z']['J_fwd'], [[0.0, 1.0]], 1e-5)
        assert_near_equal(totals['z', 'z']['J_fd'], [[0.0, 1.0]], 1e-5)
    def test_bug_fd_with_sparse(self):
        # This bug was found via the x57 model in pointer.

        class TimeComp(om.ExplicitComponent):

            def setup(self):
                self.node_ptau = node_ptau = np.array([-1., 0., 1.])

                self.add_input('t_duration', val=1.)
                self.add_output('time', shape=len(node_ptau))

                # Setup partials
                nn = 3
                rs = np.arange(nn)
                cs = np.zeros(nn)

                self.declare_partials(of='time', wrt='t_duration', rows=rs, cols=cs, val=1.0)

            def compute(self, inputs, outputs):
                node_ptau = self.node_ptau
                t_duration = inputs['t_duration']

                outputs['time'][:] = 0.5 * (node_ptau + 33) * t_duration

            def compute_partials(self, inputs, jacobian):
                node_ptau = self.node_ptau
                jacobian['time', 't_duration'] = 0.5 * (node_ptau + 33)

        class CellComp(om.ExplicitComponent):

            def initialize(self):
                self.options.declare('num_nodes', types=int)

            def setup(self):
                n = self.options['num_nodes']

                self.add_input('I_Li', val=3.25*np.ones(n))
                self.add_output('zSOC', val=np.ones(n))

                # Partials
                ar = np.arange(n)
                self.declare_partials(of='zSOC', wrt='I_Li', rows=ar, cols=ar)

            def compute(self, inputs, outputs):
                I_Li = inputs['I_Li']
                outputs['zSOC'] = -I_Li / (3600.0)

            def compute_partials(self, inputs, partials):
                partials['zSOC', 'I_Li'] = -1./(3600.0)

        class GaussLobattoPhase(om.Group):

            def setup(self):
                self.connect('t_duration', 'time.t_duration')

                indep = om.IndepVarComp()
                indep.add_output('t_duration', val=1.0)
                self.add_subsystem('time_extents', indep, promotes_outputs=['*'])
                self.add_design_var('t_duration', 5.0, 25.0)

                time_comp = TimeComp()
                self.add_subsystem('time', time_comp, promotes_outputs=['time'])

                self.add_subsystem(name='cell', subsys=CellComp(num_nodes=3))

                self.linear_solver = om.ScipyKrylov()
                self.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)
                self.nonlinear_solver.options['maxiter'] = 1

            def initialize(self):
                self.options.declare('ode_class', desc='System defining the ODE.')

        p = om.Problem(model=GaussLobattoPhase())
        p.model.add_objective('time', index=-1)
        p.model.linear_solver = om.ScipyKrylov(assemble_jac=True)

        p.setup(mode='fwd')
        p.set_solver_print(level=0)
        p.run_model()

        # Make sure we don't bomb out with an error.
        J = p.check_totals(out_stream=None)

        assert_near_equal(J[('time.time', 'time_extents.t_duration')]['J_fwd'][0], 17.0, 1e-5)
        assert_near_equal(J[('time.time', 'time_extents.t_duration')]['J_fd'][0], 17.0, 1e-5)

        # Try again with a direct solver and sparse assembled hierarchy.
        p = om.Problem()
        p.model.add_subsystem('sub', GaussLobattoPhase())

        p.model.sub.add_objective('time', index=-1)
        p.model.linear_solver = om.DirectSolver(assemble_jac=True)

        p.setup(mode='fwd')
        p.set_solver_print(level=0)
        p.run_model()

        # Make sure we don't bomb out with an error.
        J = p.check_totals(out_stream=None)

        assert_near_equal(J[('sub.time.time', 'sub.time_extents.t_duration')]['J_fwd'][0], 17.0, 1e-5)
        assert_near_equal(J[('sub.time.time', 'sub.time_extents.t_duration')]['J_fd'][0], 17.0, 1e-5)

        # Make sure check_totals cleans up after itself by running it a second time.
        J = p.check_totals(out_stream=None)

        assert_near_equal(J[('sub.time.time', 'sub.time_extents.t_duration')]['J_fwd'][0], 17.0, 1e-5)
        assert_near_equal(J[('sub.time.time', 'sub.time_extents.t_duration')]['J_fd'][0], 17.0, 1e-5)
    def test_vector_scaled_derivs(self):
        prob = om.Problem()
        model = prob.model

        model.add_subsystem('px', om.IndepVarComp(name="x", val=np.ones((2, ))))
        comp = model.add_subsystem('comp', DoubleArrayComp())
        model.connect('px.x', 'comp.x1')

        model.add_design_var('px.x', ref=np.array([2.0, 3.0]), ref0=np.array([0.5, 1.5]))
        model.add_objective('comp.y1', ref=np.array([[7.0, 11.0]]), ref0=np.array([5.2, 6.3]))
        model.add_constraint('comp.y2', lower=0.0, upper=1.0,
                             ref=np.array([[2.0, 4.0]]), ref0=np.array([1.2, 2.3]))

        prob.setup()
        prob.run_driver()

        # First, test that we get scaled results in compute and check totals.
        derivs = prob.compute_totals(of=['comp.y1'], wrt=['px.x'], return_format='dict',
                                     driver_scaling=True)

        oscale = np.array([1.0/(7.0-5.2), 1.0/(11.0-6.3)])
        iscale = np.array([2.0-0.5, 3.0-1.5])
        J = np.zeros((2, 2))
        J[:] = comp.JJ[0:2, 0:2]

        # doing this manually so that I don't inadvertently make an error in
        # the vector math in both the code and test.
        J[0, 0] *= oscale[0]*iscale[0]
        J[0, 1] *= oscale[0]*iscale[1]
        J[1, 0] *= oscale[1]*iscale[0]
        J[1, 1] *= oscale[1]*iscale[1]

        assert_near_equal(J, derivs['comp.y1']['px.x'], 1.0e-3)

        cderiv = prob.check_totals(driver_scaling=True, out_stream=None)
        assert_near_equal(cderiv['comp.y1', 'px.x']['J_fwd'], J, 1.0e-3)

        # cleanup after FD
        prob.run_model()

        # Now, test that default is unscaled.
        derivs = prob.compute_totals(of=['comp.y1'], wrt=['px.x'], return_format='dict')

        J = comp.JJ[0:2, 0:2]
        assert_near_equal(J, derivs['comp.y1']['px.x'], 1.0e-3)

        cderiv = prob.check_totals(out_stream=None)
        assert_near_equal(cderiv['comp.y1', 'px.x']['J_fwd'], J, 1.0e-3)
    def test_cs_around_newton(self):
        # Basic sellar test.
        prob = om.Problem()
        model = prob.model
        sub = model.add_subsystem('sub', om.Group(), promotes=['*'])

        model.add_subsystem('px', om.IndepVarComp('x', 1.0), promotes=['x'])
        model.add_subsystem('pz', om.IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])

        sub.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
        sub.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

        model.add_subsystem('obj_cmp', om.ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                                   z=np.array([0.0, 0.0]), x=0.0),
                            promotes=['obj', 'x', 'z', 'y1', 'y2'])

        model.add_subsystem('con_cmp1', om.ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
        model.add_subsystem('con_cmp2', om.ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

        sub.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)
        sub.linear_solver = om.DirectSolver(assemble_jac=False)

        # Need this.
        model.linear_solver = om.LinearBlockGS()

        prob.model.add_design_var('x', lower=-100, upper=100)
        prob.model.add_design_var('z', lower=-100, upper=100)
        prob.model.add_objective('obj')
        prob.model.add_constraint('con1', upper=0.0)
        prob.model.add_constraint('con2', upper=0.0)

        prob.setup(check=False, force_alloc_complex=True)
        prob.set_solver_print(level=0)

        prob.run_model()

        totals = prob.check_totals(method='cs', out_stream=None)

        for key, val in totals.items():
            assert_near_equal(val['rel error'][0], 0.0, 1e-10)

    def test_cs_around_broyden(self):
        # Basic sellar test.
        prob = om.Problem()
        model = prob.model
        sub = model.add_subsystem('sub', om.Group(), promotes=['*'])

        model.add_subsystem('px', om.IndepVarComp('x', 1.0), promotes=['x'])
        model.add_subsystem('pz', om.IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])

        sub.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
        sub.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

        model.add_subsystem('obj_cmp', om.ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                                   z=np.array([0.0, 0.0]), x=0.0),
                            promotes=['obj', 'x', 'z', 'y1', 'y2'])

        model.add_subsystem('con_cmp1', om.ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
        model.add_subsystem('con_cmp2', om.ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

        sub.nonlinear_solver = om.BroydenSolver()
        sub.linear_solver = om.DirectSolver()

        # Need this.
        model.linear_solver = om.LinearBlockGS()

        prob.model.add_design_var('x', lower=-100, upper=100)
        prob.model.add_design_var('z', lower=-100, upper=100)
        prob.model.add_objective('obj')
        prob.model.add_constraint('con1', upper=0.0)
        prob.model.add_constraint('con2', upper=0.0)

        prob.setup(check=False, force_alloc_complex=True)
        prob.set_solver_print(level=0)

        prob.run_model()

        totals = prob.check_totals(method='cs', out_stream=None)

        for key, val in totals.items():
            assert_near_equal(val['rel error'][0], 0.0, 1e-6)
    def test_cs_around_newton_top_sparse(self):
        prob = om.Problem()
        prob.model = SellarDerivatives()
        prob.setup(force_alloc_complex=True)

        prob.model.nonlinear_solver = om.NewtonSolver(solve_subsystems=True)
        prob.model.linear_solver = om.DirectSolver(assemble_jac=True)

        prob.run_model()

        totals = prob.check_totals(of=['obj', 'con1'], wrt=['x', 'z'], method='cs', out_stream=None)

        for key, val in totals.items():
            assert_near_equal(val['rel error'][0], 0.0, 3e-8)

    def test_cs_around_broyden_top_sparse(self):
        prob = om.Problem()
        prob.model = SellarDerivatives()
        prob.setup(force_alloc_complex=True)

        prob.model.nonlinear_solver = om.BroydenSolver()
        prob.model.linear_solver = om.DirectSolver(assemble_jac=True)

        prob.run_model()

        totals = prob.check_totals(of=['obj', 'con1'], wrt=['x', 'z'], method='cs', out_stream=None)

        for key, val in totals.items():
            assert_near_equal(val['rel error'][0], 0.0, 7e-8)

    def test_check_totals_on_approx_model(self):
        prob = om.Problem()
        prob.model = SellarDerivatives()
        prob.model.approx_totals(method='cs')
        prob.setup(force_alloc_complex=True)

        prob.model.nonlinear_solver = om.NewtonSolver(solve_subsystems=True)
        prob.model.linear_solver = om.DirectSolver()

        prob.run_model()

        totals = prob.check_totals(of=['obj', 'con1'], wrt=['x', 'z'], method='cs', out_stream=None)

        for key, val in totals.items():
            assert_near_equal(val['rel error'][0], 0.0, 3e-8)

    def test_cs_error_allocate(self):
        prob = om.Problem()
        model = prob.model
        model.add_subsystem('p', om.IndepVarComp('x', 3.0), promotes=['*'])
        model.add_subsystem('comp', ParaboloidTricky(), promotes=['*'])
        prob.setup()
        prob.run_model()

        with self.assertRaises(RuntimeError) as cm:
            prob.check_totals(method='cs')

        msg = "\nProblem: To enable complex step, specify 'force_alloc_complex=True' when calling " + \
              "setup on the problem, e.g. 'problem.setup(force_alloc_complex=True)'"
        self.assertEqual(str(cm.exception), msg)
    def test_fd_zero_check(self):

        class BadComp(om.ExplicitComponent):

            def setup(self):
                self.add_input('x', 3.0)
                self.add_output('y', 3.0)

                self.declare_partials('y', 'x')

            def compute(self, inputs, outputs):
                pass

            def compute_partials(self, inputs, partials):
                partials['y', 'x'] = 3.0 * inputs['x'] + 5

        prob = om.Problem()
        model = prob.model
        model.add_subsystem('p', om.IndepVarComp('x', 3.0))
        model.add_subsystem('comp', BadComp())
        model.connect('p.x', 'comp.x')
        model.add_design_var('p.x')
        model.add_objective('comp.y')
        prob.setup()
        prob.run_model()

        # This test verifies fix of a TypeError (division by None)
        J = prob.check_totals(out_stream=None)
        assert_near_equal(J['comp.y', 'p.x']['J_fwd'], [[14.0]], 1e-6)
        assert_near_equal(J['comp.y', 'p.x']['J_fd'], [[0.0]], 1e-6)

    def test_response_index(self):
        prob = om.Problem()
        model = prob.model
        model.add_subsystem('p', om.IndepVarComp('x', np.ones(2)), promotes=['*'])
        model.add_subsystem('comp', om.ExecComp('y=2*x', x=np.ones(2), y=np.ones(2)),
                            promotes=['*'])
        model.add_design_var('x')
        model.add_constraint('y', indices=[1], lower=0.0)
        prob.setup()
        prob.run_model()

        stream = StringIO()
        prob.check_totals(out_stream=stream)
        lines = stream.getvalue().splitlines()
        self.assertTrue('index size: 1' in lines[3])

    def test_linear_cons(self):
        # Linear constraints were mistakenly forgotten.
        p = om.Problem()

        p.model.add_subsystem('stuff', om.ExecComp(['y = x', 'cy = x', 'lcy = 3*x'],
                                                   x={'units': 'inch'},
                                                   y={'units': 'kg'},
                                                   lcy={'units': 'kg'}),
                              promotes=['*'])

        p.model.add_design_var('x', units='ft')
        p.model.add_objective('y', units='lbm')
        p.model.add_constraint('lcy', units='lbm', lower=0, linear=True)

        p.setup()
        p['x'] = 1.0
        p.run_model()

        stream = StringIO()
        J_driver = p.check_totals(out_stream=stream)
        lines = stream.getvalue().splitlines()

        self.assertTrue("Full Model: 'stuff.lcy' wrt 'x' (Linear constraint)" in lines[3])
        self.assertTrue("Absolute Error (Jan - Jfd)" in lines[6])
        self.assertTrue("Relative Error (Jan - Jfd) / Jfd" in lines[8])

        assert_near_equal(J_driver['stuff.y', 'x']['J_fwd'][0, 0], 1.0)
        assert_near_equal(J_driver['stuff.lcy', 'x']['J_fwd'][0, 0], 3.0)
@unittest.skipUnless(MPI and PETScVector, "MPI and PETSc are required.")
class TestProblemCheckTotalsMPI(unittest.TestCase):

    N_PROCS = 2

    def test_indepvarcomp_under_par_sys(self):
        prob = om.Problem()
        prob.model = FanInSubbedIDVC()

        prob.setup(check=False, mode='rev')
        prob.set_solver_print(level=0)
        prob.run_model()

        J = prob.check_totals(out_stream=None)
        assert_near_equal(J['sum.y', 'sub.sub1.p1.x']['J_fwd'], [[2.0]], 1.0e-6)
        assert_near_equal(J['sum.y', 'sub.sub2.p2.x']['J_fwd'], [[4.0]], 1.0e-6)
        assert_near_equal(J['sum.y', 'sub.sub1.p1.x']['J_fd'], [[2.0]], 1.0e-6)
        assert_near_equal(J['sum.y', 'sub.sub2.p2.x']['J_fd'], [[4.0]], 1.0e-6)


if __name__ == "__main__":
    unittest.main()

# music_manager/__init__.py (cedi4155476/musicmanager, MIT)
def main():
    import __main__
    __main__.launch()

# migration/migrator/migrations/system/20200720000000_inprogressgrading.py (Submitty, BSD-3-Clause)
import os
import shutil
from pathlib import Path


def up(config):
    before = Path(config.submitty['submitty_data_dir'], 'grading')
    after = Path(config.submitty['submitty_data_dir'], 'in_progress_grading')
    if not os.path.isdir(before):
        raise SystemExit("ERROR: grading directory does not exist")
    if os.path.isdir(after):
        raise SystemExit("ERROR: in_progress_grading directory already exists")
    shutil.move(before, after)


def down(config):
    before = Path(config.submitty['submitty_data_dir'], 'in_progress_grading')
    after = Path(config.submitty['submitty_data_dir'], 'grading')
    if not os.path.isdir(before):
        raise SystemExit("ERROR: in_progress_grading directory does not exist")
    if os.path.isdir(after):
        raise SystemExit("ERROR: grading directory already exists")
    shutil.move(before, after)
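The migration only needs a `config` object exposing `config.submitty['submitty_data_dir']`, so it can be sanity-checked against a temporary directory with a stand-in config. The `FakeConfig` class below is a hypothetical test harness, not part of Submitty, and `up()` is inlined here to keep the sketch self-contained:

```python
import os
import shutil
import tempfile
from pathlib import Path


class FakeConfig:
    """Stand-in mirroring the config.submitty['submitty_data_dir'] access above."""
    def __init__(self, data_dir):
        self.submitty = {'submitty_data_dir': data_dir}


def up(config):
    # Inlined copy of the migration's up() so the harness runs standalone.
    before = Path(config.submitty['submitty_data_dir'], 'grading')
    after = Path(config.submitty['submitty_data_dir'], 'in_progress_grading')
    if not os.path.isdir(before):
        raise SystemExit("ERROR: grading directory does not exist")
    if os.path.isdir(after):
        raise SystemExit("ERROR: in_progress_grading directory already exists")
    shutil.move(before, after)


root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, 'grading'))
up(FakeConfig(root))
migrated = os.path.isdir(os.path.join(root, 'in_progress_grading'))
old_gone = not os.path.isdir(os.path.join(root, 'grading'))
shutil.rmtree(root)
```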

# PYTHON_POO/AUumalinha.py (davihonorato/Curso-python, MIT)
"""One-line docstring"""
variavel = 'valor'


def um():
    return 1

# api/game_api/admin.py (dgnsrekt/Spotifyle, MIT)
from django.contrib import admin
from .models import Choice, Game, Stage
# Register your models here.
admin.site.register(Game)
admin.site.register(Stage)
admin.site.register(Choice)

# doclib/cli/commands/__init__.py (mark-ep/doclib, MIT)
from .project import Project
from .category import Category
# from .document import Document

# bento/__init__.py (dereklarson/bento, MIT)
# __init__.py is needed for Jinja PackageLoader
from ._version import __version__ # noqa
from .component import Component # noqa
from .bank import Bank # noqa
from .bento import Bento # noqa

# tests/answers_to_yaml_test.py (tamsin-mehew/Brainlabs-YAML-Generator, MIT)
from blyaml.answers_to_yaml import yaml_str


def yaml_str_test() -> None:
    assert yaml_str("true") is True
    assert yaml_str("false") is False
    assert yaml_str("null") == "null"
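The test pins down `yaml_str`'s contract on three YAML scalars: boolean literals come back as Python booleans, while `"null"` stays a string. A minimal implementation consistent with those assertions might look like the sketch below; the real `blyaml` function may handle additional cases, so this is only an illustration:

```python
def yaml_str(value: str):
    # Map YAML boolean literals to Python bools; leave everything else,
    # including the literal "null", as the original string.
    if value == "true":
        return True
    if value == "false":
        return False
    return value


assert yaml_str("true") is True
assert yaml_str("false") is False
assert yaml_str("null") == "null"
assert yaml_str("hello") == "hello"
```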

# sphinx_scality/directives/__init__.py (scality/sphinx_scality, Apache-2.0)
from . import command
from . import copy


def setup(app):
    command.setup(app)
    copy.setup(app)

# dirToPod.wsgi (TheRealFalcon/dirToPod, Apache-2.0)
import sys
sys.path.insert(0, '/var/www/dirToPod/')
from server import app as application

# NeuralNetwork.py (gavalle94/NN-demo, MIT)
import time
import random
from math import ceil
import numpy as np
from copy import deepcopy
from utils import *
from transfer_functions import *
class NeuralNetwork(object):
'''
3 layers NN: input, hidden and output
'''
def __init__(self, input_layer_size, hidden_layer_size, output_layer_size, transfer_f=sigmoid, transfer_df=dsigmoid):
"""
input_layer_size: number of input neurons
hidden_layer_size: number of hidden neurons
output_layer_size: number of output neurons
iterations: number of iterations
learning_rate: initial learning rate
"""
# initialize transfer functions
self.transfer_f = transfer_f
self.transfer_df = transfer_df
# initialize layer sizes
self.input_layer_size = input_layer_size+1 # +1 for the bias node in the input Layer
self.hidden_layer_size = hidden_layer_size+1 # +1 for the bias node in the hidden layer
self.output_layer_size = output_layer_size
# create randomized weights Yann LeCun method in 1988's paper ( Default values)
self.weights_init()
def weights_init(self,wi=None,wo=None):
'''
Initialize the weight matrixes
'''
input_range = 1.0 / self.input_layer_size ** (1/2)
if wi is not None:
self.W_input_to_hidden = deepcopy(wi) # weights between input and hidden layers
else:
self.W_input_to_hidden = np.random.normal(loc = 0, scale = input_range, size =(self.input_layer_size, self.hidden_layer_size-1))
if wo is not None:
self.W_hidden_to_output = deepcopy(wo) # weights between hidden and output layers
else:
self.W_hidden_to_output = np.random.uniform(size = (self.hidden_layer_size, self.output_layer_size)) / np.sqrt(self.hidden_layer_size)
    def train(self, data, validation_data, iterations=300, learning_rate=5.0, batch_size=None):
        '''
        Train the NN over a dataset: loss function = MSE
        '''
        # check the batch_size value
        if batch_size is None:
            # gradient descent over the full training set
            batch_size = len(data[0])
        if batch_size <= 0:
            raise ValueError('batch size value is not correct')
        # keep track of the time needed for the training
        start_time = time.time()
        # reset variables
        training_accuracies = []
        validation_accuracies = []
        errors = []
        # transpose the dataset: will be useful later
        dataset = np.transpose(data).tolist()
        # initialize best results variables
        best_val_acc = None
        best_i2h_W = self.W_input_to_hidden
        best_h2o_W = self.W_hidden_to_output
        # number of batches
        n_batch = ceil(len(data[0]) / batch_size)
        # iterations over the training set
        for it in range(iterations):
            # change the order of the samples
            random.shuffle(dataset)
            # retrieve features and labels
            inputs = [x[0] for x in dataset]
            targets = [x[1] for x in dataset]
            # iterations over all the batches
            for i in range(0, len(inputs), batch_size):
                # define the current batch
                end = min(i+batch_size, len(inputs))
                batch_inputs = inputs[i:end]
                batch_targets = targets[i:end]
                # reset NN output matrix (sized to the current batch, which may be partial)
                self.o_output = np.ones((end-i, self.output_layer_size))
                # feedforward step
                self.feedforward(batch_inputs)
                # backpropagation step
                self.backpropagate(batch_targets, learning_rate=learning_rate)
                # compute the error on the current batch
                error = np.square(batch_targets - self.o_output)
            # compute accuracies, to be printed for the final graph
            training_accuracies.append(100*self.predict(data)/len(data[0]))
            validation_accuracies.append(100*self.predict(validation_data)/len(validation_data[0]))
            # ...better results obtained?
            if best_val_acc is None or validation_accuracies[-1] > best_val_acc:
                best_i2h_W = self.W_input_to_hidden
                best_h2o_W = self.W_hidden_to_output
                best_val_acc = validation_accuracies[-1]
        # save best results as parameters of the NN
        self.W_input_to_hidden = best_i2h_W
        self.W_hidden_to_output = best_h2o_W
        # display obtained results
        print("Training time:", time.time()-start_time)
        plot_train_val(t=range(1, iterations+1), st=training_accuracies, sv=validation_accuracies, hn=self.hidden_layer_size-1, lr=learning_rate)
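The inner loop above walks the shuffled samples in fixed-size slices, with `min` guarding the final, possibly shorter batch. The slicing pattern in isolation (toy list, batch size of 4 chosen for illustration):

```python
data = list(range(10))
batch_size = 4
batches = []
for i in range(0, len(data), batch_size):
    end = min(i + batch_size, len(data))  # last batch may be shorter
    batches.append(data[i:end])
print(batches)  # → [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```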
    def train_xe(self, data, validation_data, iterations=300, learning_rate=5.0, batch_size=None):
        '''
        Train the NN over a dataset: loss function = cross-entropy
        '''
        # check the batch_size value
        if batch_size is None:
            # gradient descent over the full training set
            batch_size = len(data[0])
        if batch_size <= 0:
            raise ValueError('batch size value is not correct')
        # keep track of the time needed for the training
        start_time = time.time()
        # reset variables
        training_accuracies = []
        validation_accuracies = []
        n = len(data[0])
        # number of batches
        n_batches = ceil(len(data[0]) / batch_size)
        # transpose the dataset: will be useful later
        dataset = np.transpose(data).tolist()
        # initialize best results variables
        best_val_acc = None
        best_i2h_W = self.W_input_to_hidden
        best_h2o_W = self.W_hidden_to_output
        # iterations over the training set
        for it in range(iterations):
            # change the order of the samples
            random.shuffle(dataset)
            # retrieve features and labels
            inputs = [x[0] for x in dataset]
            targets = [x[1] for x in dataset]
            # we want to compute the losses for the last iteration
            error = 0.0
            xe = 0.0
            # iterations over all the batches
            for i in range(0, len(inputs), batch_size):
                # define the current batch
                end = min(i+batch_size, len(inputs))
                batch_inputs = inputs[i:end]
                batch_targets = targets[i:end]
                # reset NN output matrix (sized to the current batch, which may be partial)
                self.o_output = np.ones((end-i, self.output_layer_size))
                # feedforward step
                self.feedforward_xe(batch_inputs)
                # backpropagation step
                self.backpropagate_xe(batch_targets, learning_rate=learning_rate)
                # compute the losses on the current batch
                xe -= np.sum(np.multiply(
                    batch_targets,
                    np.log(self.o_output)
                ))
                error += np.sum(1.0/n * np.square(batch_targets - self.o_output))
            # compute accuracies, to be printed for the final graph
            training_accuracies.append(100*self.predict_xe(data)/len(data[0]))
            validation_accuracies.append(100*self.predict_xe(validation_data)/len(validation_data[0]))
            # ...better results obtained?
            if best_val_acc is None or validation_accuracies[-1] > best_val_acc:
                best_i2h_W = self.W_input_to_hidden
                best_h2o_W = self.W_hidden_to_output
                best_val_acc = validation_accuracies[-1]
        # save best results as parameters of the NN
        self.W_input_to_hidden = best_i2h_W
        self.W_hidden_to_output = best_h2o_W
        # display obtained results
        print("Training time:", time.time()-start_time)
        print("MSE loss:", error)
        print("XE loss:", xe)
        plot_train_val(t=range(1, iterations+1), st=training_accuracies, sv=validation_accuracies, hn=self.hidden_layer_size-1, lr=learning_rate)
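`train_xe` accumulates the cross-entropy as `-sum(targets * log(outputs))`; with one-hot targets, only the log-probability of the true class survives the product. A worked toy example (values chosen purely for illustration):

```python
import numpy as np

targets = np.array([[1, 0, 0],
                    [0, 1, 0]])          # one-hot labels
outputs = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.8, 0.1]])    # predicted class probabilities
xe = -np.sum(np.multiply(targets, np.log(outputs)))
print(round(float(xe), 4))  # → 0.5798 (= -(ln 0.7 + ln 0.8))
```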
    def predict(self, test_data):
        """
        Evaluate performance by counting how many examples in test_data are correctly classified.
        """
        # reset NN output matrix
        self.o_output = np.ones((len(test_data[0]), self.output_layer_size))
        # feedforward the data
        self.feedforward(test_data[0])
        answer = np.argmax(test_data[1], axis=1).reshape(len(test_data[0]), 1)
        prediction = np.argmax(self.o_output, axis=1).reshape(len(test_data[0]), 1)
        count = len(test_data[0]) - np.count_nonzero(answer - prediction)
        return count
    def predict_xe(self, test_data):
        """
        Evaluate performance by counting how many examples in test_data are correctly classified.
        """
        # reset NN output matrix
        self.o_output = np.ones((len(test_data[0]), self.output_layer_size))
        # feedforward the data
        self.feedforward_xe(test_data[0])
        answer = np.argmax(test_data[1], axis=1).reshape(len(test_data[0]), 1)
        prediction = np.argmax(self.o_output, axis=1).reshape(len(test_data[0]), 1)
        count = len(test_data[0]) - np.count_nonzero(answer - prediction)
        return count
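`predict` scores accuracy without an explicit loop: `argmax` turns one-hot labels and network outputs into class indices, and every zero in their difference marks a correct prediction. The counting trick on toy data (not the real network output):

```python
import numpy as np

targets = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]])
outputs = np.array([[0.8, 0.1, 0.1],
                    [0.2, 0.7, 0.1],
                    [0.3, 0.3, 0.4],
                    [0.2, 0.5, 0.3]])   # last sample is misclassified
answer = np.argmax(targets, axis=1)
prediction = np.argmax(outputs, axis=1)
count = len(targets) - np.count_nonzero(answer - prediction)
print(count)  # → 3
```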
| 43.845794 | 146 | 0.611745 | 1,219 | 9,383 | 4.497949 | 0.164069 | 0.039394 | 0.010943 | 0.017509 | 0.792814 | 0.746672 | 0.728798 | 0.728798 | 0.728798 | 0.728798 | 0 | 0.016424 | 0.305659 | 9,383 | 213 | 147 | 44.051643 | 0.825173 | 0.23468 | 0 | 0.583333 | 0 | 0 | 0.015416 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.058333 | 0 | 0.133333 | 0.033333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
db8d2f6d35b453d92c765336c6c832ea5858fb00 | 4,805 | py | Python | control_limits/src/vis.py | papelero/control-limit-search | c3b8dfbc22e1c895f7e868f9243e21cee7599444 | [
"MIT"
] | null | null | null | control_limits/src/vis.py | papelero/control-limit-search | c3b8dfbc22e1c895f7e868f9243e21cee7599444 | [
"MIT"
] | null | null | null | control_limits/src/vis.py | papelero/control-limit-search | c3b8dfbc22e1c895f7e868f9243e21cee7599444 | [
"MIT"
] | null | null | null | import matplotlib.pyplot as plt
import seaborn as sns
def plot_control_limits(train_data, test_data, train_labels, test_labels, training_output, testing_output):
"""Show the control limits
    :param train_data: training series, one row per sample
    :param test_data: test series, one row per sample
    :param train_labels: labels of the training series (label 1 marks normal data)
    :param test_labels: labels of the test series (label 1 marks normal data)
    :param training_output: training results dict with "time_steps", "control_limits", "false_positive" and "false_negative" entries
    :param testing_output: testing results dict with the same entries
"""
fig, axs = plt.subplots(1, 2, figsize=(5.512, 2.168))
# Working with the train data
# Plot the normal data
axs[0].plot(train_data[train_labels == 1, :].T, color='darkgreen', lw=0.5, alpha=0.1, zorder=0)
# Access control limits and its time steps
time_steps, control_limits = training_output["time_steps"], training_output["control_limits"]
# Access the resulting false positive and false negative
data_false_negative, data_false_positive = training_output["false_negative"], training_output["false_positive"]
# Plot the false positive
for false_positive in data_false_positive[0]:
if false_positive.size != 0:
axs[0].plot(false_positive, color='darkred', lw=0.5, label='FP')
# Plot the false negative
for false_negative in data_false_negative[0]:
if false_negative.size != 0:
axs[0].plot(false_negative, color='darkgreen', lw=0.5, ls='--', label='FN')
# Plot the control limits
for i in range(len(time_steps)):
axs[0].plot(time_steps[i], control_limits[i][0], color='navy', ls='--', lw=0.8, zorder=2)
axs[0].plot([time_steps[i][0], time_steps[i][0]], [control_limits[i][0][0], control_limits[i][1][0]],
color='navy', ls='--', lw=0.8, zorder=2)
axs[0].plot(time_steps[i], control_limits[i][1], color='navy', ls='--', lw=0.8, zorder=2)
axs[0].plot([time_steps[i][-1], time_steps[i][-1]], [control_limits[i][0][-1], control_limits[i][1][-1]],
color='navy', ls='--', lw=0.8, zorder=2)
axs[0].fill_between(time_steps[i], control_limits[i][0], control_limits[i][1], facecolor='navy', alpha=0.25)
axs[0].set_xlabel('$X$', fontsize=7)
axs[0].set_ylabel('$Y$', fontsize=7)
axs[0].set_title('Training', fontsize=7)
axs[0].set_xticks([])
axs[0].set_yticks([])
handles, labels = axs[0].get_legend_handles_labels()
unique = [(h, l) for i, (h, l) in enumerate(zip(handles, labels)) if l not in labels[:i]]
if len(unique) != 0:
axs[0].legend(*zip(*unique),
bbox_to_anchor=(0.275, 0.675),
fontsize=7,
edgecolor='white',
facecolor='white')
# Working with the test data
# Plot the normal data
axs[1].plot(test_data[test_labels == 1, :].T, color='darkgreen', lw=0.5, alpha=0.1, zorder=0)
# Access control limits and its time steps
time_steps, control_limits = testing_output["time_steps"], testing_output["control_limits"]
# Access the resulting false positive and false negative
data_false_negative, data_false_positive = testing_output["false_negative"], testing_output["false_positive"]
# Plot the false positive
for false_positive in data_false_positive[0]:
if false_positive.size != 0:
axs[1].plot(false_positive, color='darkred', lw=0.5, label='FP')
# Plot the false negative
for false_negative in data_false_negative[0]:
if false_negative.size != 0:
axs[1].plot(false_negative, color='darkgreen', lw=0.5, ls='--', label='FN')
# Plot the control limits
for i in range(len(time_steps)):
axs[1].plot(time_steps[i], control_limits[i][0], color='navy', ls='--', lw=0.8, zorder=2)
axs[1].plot([time_steps[i][0], time_steps[i][0]], [control_limits[i][0][0], control_limits[i][1][0]],
color='navy', ls='--', lw=0.8, zorder=2)
axs[1].plot(time_steps[i], control_limits[i][1], color='navy', ls='--', lw=0.8, zorder=2)
axs[1].plot([time_steps[i][-1], time_steps[i][-1]], [control_limits[i][0][-1], control_limits[i][1][-1]],
color='navy', ls='--', lw=0.8, zorder=2)
axs[1].fill_between(time_steps[i], control_limits[i][0], control_limits[i][1], facecolor='navy', alpha=0.25)
axs[1].set_xlabel('$X$', fontsize=7)
axs[1].set_title('Testing', fontsize=7)
axs[1].set_xticks([])
axs[1].set_yticks([])
handles, labels = axs[1].get_legend_handles_labels()
unique = [(h, l) for i, (h, l) in enumerate(zip(handles, labels)) if l not in labels[:i]]
axs[1].legend(*zip(*unique),
bbox_to_anchor=(0.275, 0.675),
fontsize=7,
edgecolor='white',
facecolor='white')
sns.despine()
fig.tight_layout()
plt.show()
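Because every FP/FN curve is plotted with the same label, the handles/labels lists contain duplicates; the comprehension above keeps only the first handle per label. The de-duplication idiom on stand-in lists (matplotlib is not needed for the illustration):

```python
# stand-ins for the lists returned by ax.get_legend_handles_labels()
handles = ['line1', 'line2', 'line3', 'line4']
labels = ['FP', 'FN', 'FP', 'FN']
unique = [(h, l) for i, (h, l) in enumerate(zip(handles, labels)) if l not in labels[:i]]
print(unique)  # → [('line1', 'FP'), ('line2', 'FN')]
```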
| 44.490741 | 116 | 0.621852 | 726 | 4,805 | 3.950413 | 0.132231 | 0.117852 | 0.078103 | 0.039052 | 0.772664 | 0.735704 | 0.702232 | 0.702232 | 0.702232 | 0.701534 | 0 | 0.040316 | 0.210198 | 4,805 | 107 | 117 | 44.906542 | 0.715415 | 0.137357 | 0 | 0.369231 | 0 | 0 | 0.064926 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015385 | false | 0 | 0.030769 | 0 | 0.046154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
dbbe19eecf0c636e5d32c16a34e93361622ba71f | 125 | py | Python | feed/admin.py | ComputerTutor-12thClass/hortihub | 32522a70ee550da9c859f1671f05099503344e7f | [
"MIT"
] | 12 | 2018-07-07T22:35:27.000Z | 2022-03-07T12:23:09.000Z | feed/admin.py | ComputerTutor-12thClass/hortihub | 32522a70ee550da9c859f1671f05099503344e7f | [
"MIT"
] | 5 | 2018-07-07T22:50:11.000Z | 2021-09-07T23:53:09.000Z | feed/admin.py | ComputerTutor-12thClass/hortihub | 32522a70ee550da9c859f1671f05099503344e7f | [
"MIT"
] | 5 | 2020-08-28T15:04:16.000Z | 2021-08-13T01:37:15.000Z | from django.contrib import admin
from feed.models import UserPost
# Register your models here.
admin.site.register(UserPost)
| 25 | 32 | 0.824 | 18 | 125 | 5.722222 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112 | 125 | 4 | 33 | 31.25 | 0.927928 | 0.208 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
91e1162d3f753662e94bcb4a33a7962721d1c5c1 | 74 | py | Python | src/dual_quaternions_ros/__init__.py | Achllle/dual_quaternions_ros | aa9f4c1ccc6460e5259e2ca301accb46bb5d81cd | [
"MIT"
] | 28 | 2019-11-18T22:49:49.000Z | 2021-09-23T07:57:03.000Z | src/dual_quaternions_ros/__init__.py | Achllle/dual_quaternions_ros | aa9f4c1ccc6460e5259e2ca301accb46bb5d81cd | [
"MIT"
] | 31 | 2019-02-06T22:48:35.000Z | 2020-08-14T09:49:40.000Z | src/dual_quaternions_ros/__init__.py | Achllle/dual_quaternions_ros | aa9f4c1ccc6460e5259e2ca301accb46bb5d81cd | [
"MIT"
] | 7 | 2019-03-05T14:44:50.000Z | 2021-06-20T06:38:17.000Z | from .dual_quaternions_ros import from_ros, to_ros_pose, to_ros_transform
| 37 | 73 | 0.878378 | 13 | 74 | 4.461538 | 0.615385 | 0.172414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 74 | 1 | 74 | 74 | 0.852941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
37db5cfa26b673317634f88b14ae2ca1f3968e5a | 22 | py | Python | examples/file_variable.py | doboy/Underscore | d98273db3144cda79191d2c90f45d81b6d700b1f | [
"MIT"
] | 7 | 2016-09-23T00:44:05.000Z | 2021-10-04T21:19:12.000Z | examples/file_variable.py | jameswu1991/Underscore | d98273db3144cda79191d2c90f45d81b6d700b1f | [
"MIT"
] | 1 | 2016-09-23T00:45:05.000Z | 2019-02-16T19:05:37.000Z | examples/file_variable.py | jameswu1991/Underscore | d98273db3144cda79191d2c90f45d81b6d700b1f | [
"MIT"
] | 3 | 2016-09-23T01:13:15.000Z | 2018-07-20T21:22:17.000Z | print(type(__file__))
| 11 | 21 | 0.772727 | 3 | 22 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 22 | 1 | 22 | 22 | 0.619048 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
532ddc52aa944a35260530a092eed2c54899260a | 49 | py | Python | cnlp_annotator/__init__.py | szj2ys/cnlp_annotator | 1837d952a73ffe97b0e5c3523d51896e92572ce1 | [
"Apache-2.0"
] | null | null | null | cnlp_annotator/__init__.py | szj2ys/cnlp_annotator | 1837d952a73ffe97b0e5c3523d51896e92572ce1 | [
"Apache-2.0"
] | null | null | null | cnlp_annotator/__init__.py | szj2ys/cnlp_annotator | 1837d952a73ffe97b0e5c3523d51896e92572ce1 | [
"Apache-2.0"
] | null | null | null | from .__version__ import version, __version__
| 9.8 | 45 | 0.795918 | 5 | 49 | 6.2 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163265 | 49 | 4 | 46 | 12.25 | 0.756098 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
72adecefc3a9d4a7f362641e6ffe6e9f576359ac | 396 | py | Python | tests/test_searchFlag.py | zomry1/Hystrix-Box | 52a3827a6bcfaf1838f8765ffc59a8c0068da497 | [
"MIT"
] | 7 | 2020-04-17T11:48:40.000Z | 2022-02-18T01:33:20.000Z | tests/test_searchFlag.py | zomry1/Hystrix-Box | 52a3827a6bcfaf1838f8765ffc59a8c0068da497 | [
"MIT"
] | 8 | 2020-04-17T14:25:00.000Z | 2020-04-20T21:00:02.000Z | tests/test_searchFlag.py | zomry1/Hystrix-Box | 52a3827a6bcfaf1838f8765ffc59a8c0068da497 | [
"MIT"
] | 2 | 2020-04-17T13:40:09.000Z | 2020-10-13T12:24:41.000Z | from HystrixBox.Utils.searchFlag import searchFlag
def test_search_flag_with_spaces():
assert (searchFlag('zomry1CTF{}','Hi there is zomry1CTF{This_is_an_example} an flag here?') == ['zomry1CTF{This_is_an_example}'])
def test_search_flag_without_spaces():
assert (searchFlag('zomry1CTF{}','Hi there iszomry1CTF{This_is_an_example}an flag here?') == ['zomry1CTF{This_is_an_example}'])
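`searchFlag`'s implementation lives in HystrixBox and is not shown here; judging by the tests, it extracts every substring matching a `prefix{...}` flag format. A regex sketch of that behaviour (an assumption about the library, not its actual code):

```python
import re

text = 'Hi there is zomry1CTF{This_is_an_example} an flag here?'
flags = re.findall(r'zomry1CTF\{[^}]*\}', text)
print(flags)  # → ['zomry1CTF{This_is_an_example}']
```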
| 39.6 | 133 | 0.772727 | 55 | 396 | 5.2 | 0.4 | 0.083916 | 0.111888 | 0.20979 | 0.63986 | 0.608392 | 0.342657 | 0.342657 | 0.342657 | 0.342657 | 0 | 0.01676 | 0.09596 | 396 | 9 | 134 | 44 | 0.782123 | 0 | 0 | 0.4 | 0 | 0 | 0.474747 | 0.30303 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0.4 | true | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
72f3fda3b2736aee1878f0627977b4fd31eb940c | 138 | py | Python | libs/yowsup/yowsup/yowsup/layers/protocol_acks/protocolentities/__init__.py | akshitpradhan/TomHack | 837226e7b38de1140c19bc2d478eeb9e379ed1fd | [
"MIT"
] | 22 | 2017-07-14T20:01:17.000Z | 2022-03-08T14:22:39.000Z | libs/yowsup/yowsup/yowsup/layers/protocol_acks/protocolentities/__init__.py | akshitpradhan/TomHack | 837226e7b38de1140c19bc2d478eeb9e379ed1fd | [
"MIT"
] | 6 | 2017-07-14T21:03:50.000Z | 2021-06-10T19:08:32.000Z | libs/yowsup/yowsup/yowsup/layers/protocol_acks/protocolentities/__init__.py | akshitpradhan/TomHack | 837226e7b38de1140c19bc2d478eeb9e379ed1fd | [
"MIT"
] | 13 | 2017-07-14T20:13:14.000Z | 2020-11-12T08:06:05.000Z | from .ack import AckProtocolEntity
from .ack_incoming import IncomingAckProtocolEntity
from .ack_outgoing import OutgoingAckProtocolEntity | 46 | 51 | 0.898551 | 14 | 138 | 8.714286 | 0.571429 | 0.172131 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07971 | 138 | 3 | 52 | 46 | 0.96063 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
72f5b765ab00d8fb544f00da09ae8b60eb76a940 | 50 | py | Python | molsysmt/help/__init__.py | dprada/molsysmt | 83f150bfe3cfa7603566a0ed4aed79d9b0c97f5d | [
"MIT"
] | null | null | null | molsysmt/help/__init__.py | dprada/molsysmt | 83f150bfe3cfa7603566a0ed4aed79d9b0c97f5d | [
"MIT"
] | null | null | null | molsysmt/help/__init__.py | dprada/molsysmt | 83f150bfe3cfa7603566a0ed4aed79d9b0c97f5d | [
"MIT"
] | null | null | null | from .help import forms, convert, viewers, select
| 25 | 49 | 0.78 | 7 | 50 | 5.571429 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14 | 50 | 1 | 50 | 50 | 0.906977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f402164763096ecaca7a3e928a8483154f93137c | 177 | py | Python | applications/admin.py | benrpinto/ausimin | 37d9879a729f637ff03d030168904737fd415776 | [
"MIT"
] | null | null | null | applications/admin.py | benrpinto/ausimin | 37d9879a729f637ff03d030168904737fd415776 | [
"MIT"
] | null | null | null | applications/admin.py | benrpinto/ausimin | 37d9879a729f637ff03d030168904737fd415776 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import ApplContent
from home.admin import ContentAdmin
# Register your models here.
admin.site.register(ApplContent,ContentAdmin)
| 25.285714 | 45 | 0.836158 | 23 | 177 | 6.434783 | 0.565217 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107345 | 177 | 6 | 46 | 29.5 | 0.936709 | 0.146893 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f4133ae0de3e40e95b01af119222381700d089cd | 44 | py | Python | parsertester/__init__.py | deep-compute/parsertester | 16bfc200ebc93778b74c72ae34f5d929d4bcdeda | [
"MIT"
] | null | null | null | parsertester/__init__.py | deep-compute/parsertester | 16bfc200ebc93778b74c72ae34f5d929d4bcdeda | [
"MIT"
] | 2 | 2016-08-09T15:57:32.000Z | 2019-08-04T08:01:01.000Z | parsertester/__init__.py | deep-compute/parsertester | 16bfc200ebc93778b74c72ae34f5d929d4bcdeda | [
"MIT"
] | null | null | null | from parsertester import ParserTester, main
| 22 | 43 | 0.863636 | 5 | 44 | 7.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113636 | 44 | 1 | 44 | 44 | 0.974359 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
f413cd7e11a8455dec62b8f5c738e675755d8149 | 83 | py | Python | ind_3.py | LokiTheGodOfBitchez/Lab_5 | 4ef27d0511df1d81a1e52af694d528629153b5f9 | [
"MIT"
] | null | null | null | ind_3.py | LokiTheGodOfBitchez/Lab_5 | 4ef27d0511df1d81a1e52af694d528629153b5f9 | [
"MIT"
] | null | null | null | ind_3.py | LokiTheGodOfBitchez/Lab_5 | 4ef27d0511df1d81a1e52af694d528629153b5f9 | [
"MIT"
] | null | null | null | n = 20
a = 0
while n < 100:
n += 1
if n % 3 == 0:
a += n
print(a)
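The loop increments n from 21 to 100 and accumulates every value divisible by 3, so it computes the sum of the multiples of 3 in that range. An equivalent one-liner gives the same result:

```python
total = sum(n for n in range(21, 101) if n % 3 == 0)
print(total)  # → 1620
```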
| 9.222222 | 18 | 0.349398 | 17 | 83 | 1.705882 | 0.588235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.209302 | 0.481928 | 83 | 8 | 19 | 10.375 | 0.465116 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
f417cb22c905719ffd1861ea4b012a43ab0cbe8f | 102 | py | Python | api/src/schemas/like.py | henriqueblang/api-archfolio | ef00c98c631086b89077a7866b9f504d447634cc | [
"MIT"
] | null | null | null | api/src/schemas/like.py | henriqueblang/api-archfolio | ef00c98c631086b89077a7866b9f504d447634cc | [
"MIT"
] | null | null | null | api/src/schemas/like.py | henriqueblang/api-archfolio | ef00c98c631086b89077a7866b9f504d447634cc | [
"MIT"
] | null | null | null | from typing import Optional
from pydantic import BaseModel
class Like(BaseModel):
user_id: int
| 12.75 | 30 | 0.77451 | 14 | 102 | 5.571429 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186275 | 102 | 7 | 31 | 14.571429 | 0.939759 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f46db841f9599b9cca8df82e79d63defde4b3a9f | 11,644 | py | Python | zerver/tests/test_unread.py | erinis-eligro/Zulip-outcast | 51153a6ce219370aee79bfe462f6e4fb956993d9 | [
"Apache-2.0"
] | null | null | null | zerver/tests/test_unread.py | erinis-eligro/Zulip-outcast | 51153a6ce219370aee79bfe462f6e4fb956993d9 | [
"Apache-2.0"
] | 1 | 2019-11-02T09:06:05.000Z | 2019-11-02T09:06:05.000Z | zerver/tests/test_unread.py | erinis-eligro/zulip-outcasts | 51153a6ce219370aee79bfe462f6e4fb956993d9 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import absolute_import
from typing import Any, Dict, List
from zerver.models import (
get_user_profile_by_email, Recipient, UserMessage
)
from zerver.lib.test_helpers import tornado_redirected_to_list
from zerver.lib.test_classes import (
ZulipTestCase,
)
import ujson
class PointerTest(ZulipTestCase):
def test_update_pointer(self):
# type: () -> None
"""
        Posting a pointer to /json/users/me/pointer (in the form {"pointer": pointer})
        changes the pointer we store for your UserProfile.
"""
self.login("hamlet@zulip.com")
self.assertEqual(get_user_profile_by_email("hamlet@zulip.com").pointer, -1)
msg_id = self.send_message("othello@zulip.com", "Verona", Recipient.STREAM)
result = self.client_put("/json/users/me/pointer", {"pointer": msg_id})
self.assert_json_success(result)
self.assertEqual(get_user_profile_by_email("hamlet@zulip.com").pointer, msg_id)
def test_api_update_pointer(self):
# type: () -> None
"""
Same as above, but for the API view
"""
email = "hamlet@zulip.com"
self.assertEqual(get_user_profile_by_email(email).pointer, -1)
msg_id = self.send_message("othello@zulip.com", "Verona", Recipient.STREAM)
result = self.client_put("/api/v1/users/me/pointer", {"pointer": msg_id},
**self.api_auth(email))
self.assert_json_success(result)
self.assertEqual(get_user_profile_by_email(email).pointer, msg_id)
def test_missing_pointer(self):
# type: () -> None
"""
Posting json to /json/users/me/pointer which does not contain a pointer key/value pair
        returns a 400 and an error message.
"""
self.login("hamlet@zulip.com")
self.assertEqual(get_user_profile_by_email("hamlet@zulip.com").pointer, -1)
result = self.client_put("/json/users/me/pointer", {"foo": 1})
self.assert_json_error(result, "Missing 'pointer' argument")
self.assertEqual(get_user_profile_by_email("hamlet@zulip.com").pointer, -1)
def test_invalid_pointer(self):
# type: () -> None
"""
        Posting json to /json/users/me/pointer with an invalid pointer returns a 400
        and an error message.
"""
self.login("hamlet@zulip.com")
self.assertEqual(get_user_profile_by_email("hamlet@zulip.com").pointer, -1)
result = self.client_put("/json/users/me/pointer", {"pointer": "foo"})
self.assert_json_error(result, "Bad value for 'pointer': foo")
self.assertEqual(get_user_profile_by_email("hamlet@zulip.com").pointer, -1)
def test_pointer_out_of_range(self):
# type: () -> None
"""
Posting json to /json/users/me/pointer with an out of range (< 0) pointer returns a 400
        and an error message.
"""
self.login("hamlet@zulip.com")
self.assertEqual(get_user_profile_by_email("hamlet@zulip.com").pointer, -1)
result = self.client_put("/json/users/me/pointer", {"pointer": -2})
self.assert_json_error(result, "Bad value for 'pointer': -2")
self.assertEqual(get_user_profile_by_email("hamlet@zulip.com").pointer, -1)
class UnreadCountTests(ZulipTestCase):
def setUp(self):
# type: () -> None
self.unread_msg_ids = [self.send_message(
"iago@zulip.com", "hamlet@zulip.com", Recipient.PERSONAL, "hello"),
self.send_message(
"iago@zulip.com", "hamlet@zulip.com", Recipient.PERSONAL, "hello2")]
# Sending a new message results in unread UserMessages being created
def test_new_message(self):
# type: () -> None
self.login("hamlet@zulip.com")
content = "Test message for unset read bit"
last_msg = self.send_message("hamlet@zulip.com", "Verona", Recipient.STREAM, content)
user_messages = list(UserMessage.objects.filter(message=last_msg))
self.assertEqual(len(user_messages) > 0, True)
for um in user_messages:
self.assertEqual(um.message.content, content)
if um.user_profile.email != "hamlet@zulip.com":
self.assertFalse(um.flags.read)
def test_update_flags(self):
# type: () -> None
self.login("hamlet@zulip.com")
result = self.client_post("/json/messages/flags",
{"messages": ujson.dumps(self.unread_msg_ids),
"op": "add",
"flag": "read"})
self.assert_json_success(result)
# Ensure we properly set the flags
found = 0
for msg in self.get_old_messages():
if msg['id'] in self.unread_msg_ids:
self.assertEqual(msg['flags'], ['read'])
found += 1
self.assertEqual(found, 2)
result = self.client_post("/json/messages/flags",
{"messages": ujson.dumps([self.unread_msg_ids[1]]),
"op": "remove", "flag": "read"})
self.assert_json_success(result)
# Ensure we properly remove just one flag
for msg in self.get_old_messages():
if msg['id'] == self.unread_msg_ids[0]:
self.assertEqual(msg['flags'], ['read'])
elif msg['id'] == self.unread_msg_ids[1]:
self.assertEqual(msg['flags'], [])
def test_update_all_flags(self):
# type: () -> None
self.login("hamlet@zulip.com")
message_ids = [self.send_message("hamlet@zulip.com", "iago@zulip.com",
Recipient.PERSONAL, "test"),
self.send_message("hamlet@zulip.com", "cordelia@zulip.com",
Recipient.PERSONAL, "test2")]
result = self.client_post("/json/messages/flags", {"messages": ujson.dumps(message_ids),
"op": "add",
"flag": "read"})
self.assert_json_success(result)
result = self.client_post("/json/messages/flags", {"messages": ujson.dumps([]),
"op": "remove",
"flag": "read",
"all": ujson.dumps(True)})
self.assert_json_success(result)
for msg in self.get_old_messages():
self.assertEqual(msg['flags'], [])
def test_mark_all_in_stream_read(self):
# type: () -> None
self.login("hamlet@zulip.com")
user_profile = get_user_profile_by_email("hamlet@zulip.com")
self.subscribe_to_stream(user_profile.email, "test_stream", user_profile.realm)
message_id = self.send_message("hamlet@zulip.com", "test_stream", Recipient.STREAM, "hello")
unrelated_message_id = self.send_message("hamlet@zulip.com", "Denmark", Recipient.STREAM, "hello")
events = [] # type: List[Dict[str, Any]]
with tornado_redirected_to_list(events):
result = self.client_post("/json/messages/flags", {"messages": ujson.dumps([]),
"op": "add",
"flag": "read",
"stream_name": "test_stream"})
self.assert_json_success(result)
self.assertTrue(len(events) == 1)
event = events[0]['event']
expected = dict(operation='add',
messages=[message_id],
flag='read',
type='update_message_flags',
all=False)
differences = [key for key in expected if expected[key] != event[key]]
self.assertTrue(len(differences) == 0)
um = list(UserMessage.objects.filter(message=message_id))
for msg in um:
if msg.user_profile.email == "hamlet@zulip.com":
self.assertTrue(msg.flags.read)
else:
self.assertFalse(msg.flags.read)
unrelated_messages = list(UserMessage.objects.filter(message=unrelated_message_id))
for msg in unrelated_messages:
if msg.user_profile.email == "hamlet@zulip.com":
self.assertFalse(msg.flags.read)
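The test above compares the expected event to the received one key by key, collecting only the mismatching keys. The comparison idiom in isolation (toy dicts; extra keys on the event are deliberately ignored):

```python
expected = dict(operation='add', flag='read', all=False)
event = dict(operation='add', flag='read', all=True, type='update_message_flags')
differences = [key for key in expected if expected[key] != event[key]]
print(differences)  # → ['all']
```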
def test_mark_all_in_invalid_stream_read(self):
# type: () -> None
self.login("hamlet@zulip.com")
invalid_stream_name = ""
result = self.client_post("/json/messages/flags", {"messages": ujson.dumps([]),
"op": "add",
"flag": "read",
"stream_name": invalid_stream_name})
self.assert_json_error(result, 'No such stream \'\'')
def test_mark_all_in_stream_topic_read(self):
# type: () -> None
self.login("hamlet@zulip.com")
user_profile = get_user_profile_by_email("hamlet@zulip.com")
self.subscribe_to_stream(user_profile.email, "test_stream", user_profile.realm)
message_id = self.send_message("hamlet@zulip.com", "test_stream", Recipient.STREAM, "hello", "test_topic")
unrelated_message_id = self.send_message("hamlet@zulip.com", "Denmark", Recipient.STREAM, "hello", "Denmark2")
events = [] # type: List[Dict[str, Any]]
with tornado_redirected_to_list(events):
result = self.client_post("/json/messages/flags", {"messages": ujson.dumps([]),
"op": "add",
"flag": "read",
"topic_name": "test_topic",
"stream_name": "test_stream"})
self.assert_json_success(result)
self.assertTrue(len(events) == 1)
event = events[0]['event']
expected = dict(operation='add',
messages=[message_id],
flag='read',
type='update_message_flags',
all=False)
differences = [key for key in expected if expected[key] != event[key]]
self.assertTrue(len(differences) == 0)
um = list(UserMessage.objects.filter(message=message_id))
for msg in um:
if msg.user_profile.email == "hamlet@zulip.com":
self.assertTrue(msg.flags.read)
unrelated_messages = list(UserMessage.objects.filter(message=unrelated_message_id))
for msg in unrelated_messages:
if msg.user_profile.email == "hamlet@zulip.com":
self.assertFalse(msg.flags.read)
def test_mark_all_in_invalid_topic_read(self):
# type: () -> None
self.login("hamlet@zulip.com")
invalid_topic_name = "abc"
result = self.client_post("/json/messages/flags", {"messages": ujson.dumps([]),
"op": "add",
"flag": "read",
"topic_name": invalid_topic_name,
"stream_name": "Denmark"})
self.assert_json_error(result, 'No such topic \'abc\'')
| 45.84252 | 118 | 0.54775 | 1,266 | 11,644 | 4.838863 | 0.131122 | 0.054848 | 0.082272 | 0.049625 | 0.813745 | 0.767548 | 0.725433 | 0.705354 | 0.692295 | 0.658342 | 0 | 0.004993 | 0.329182 | 11,644 | 253 | 119 | 46.023715 | 0.779286 | 0.078409 | 0 | 0.558659 | 0 | 0 | 0.154662 | 0.010601 | 0 | 0 | 0 | 0 | 0.223464 | 1 | 0.072626 | false | 0 | 0.03352 | 0 | 0.117318 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
be2f534b250c0eca47c38a281105cb757b7d519d | 118 | py | Python | HackerRank/Python_Learn/03_Strings/01_Swap_Case.py | Zubieta/CPP | fb4a3cbf2e4edcc590df15663cd28fb9ecab679c | [
"MIT"
] | 8 | 2017-03-02T07:56:45.000Z | 2021-08-07T20:20:19.000Z | HackerRank/Python_Learn/03_Strings/01_Swap_Case.py | zubie7a/Algorithms | fb4a3cbf2e4edcc590df15663cd28fb9ecab679c | [
"MIT"
] | null | null | null | HackerRank/Python_Learn/03_Strings/01_Swap_Case.py | zubie7a/Algorithms | fb4a3cbf2e4edcc590df15663cd28fb9ecab679c | [
"MIT"
] | 1 | 2021-08-07T20:20:20.000Z | 2021-08-07T20:20:20.000Z | # https://www.hackerrank.com/challenges/swap-case
def swap_case(s):
    # Swap the case of every letter in the string.
    return s.swapcase()
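A quick sanity check for the helper above, using HackerRank's sample input (the driver code is illustrative and not part of the submission):

```python
def swap_case(s):
    # Swap the case of every letter; non-letters are left untouched.
    return s.swapcase()

print(swap_case("Www.HackerRank.com"))  # wWW.hACKERrANK.COM
```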
| 23.6 | 49 | 0.694915 | 18 | 118 | 4.5 | 0.722222 | 0.197531 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161017 | 118 | 4 | 50 | 29.5 | 0.818182 | 0.567797 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
be41ba35b533183a0e9c3b26fcc18f0c63c4e006 | 85 | py | Python | mlpipe/callbacks/__init__.py | j-o-d-o/MLPipe-Trainer | b686dc4d28e3d4cd2c6581487f8a2491a6d7cb60 | [
"MIT"
] | null | null | null | mlpipe/callbacks/__init__.py | j-o-d-o/MLPipe-Trainer | b686dc4d28e3d4cd2c6581487f8a2491a6d7cb60 | [
"MIT"
] | null | null | null | mlpipe/callbacks/__init__.py | j-o-d-o/MLPipe-Trainer | b686dc4d28e3d4cd2c6581487f8a2491a6d7cb60 | [
"MIT"
] | null | null | null | from .save_to_mongodb import SaveToMongoDB
from .update_manager import UpdateManager
| 28.333333 | 42 | 0.882353 | 11 | 85 | 6.545455 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094118 | 85 | 2 | 43 | 42.5 | 0.935065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
be4edfc62bf8c3e3a6371d579ef11f1a0a11076f | 125 | py | Python | djangoserver/src/merchants/admin.py | Higgins723/web-homework | f10d33fd6c5dface9350f95dab8ba2dcc7c9660f | [
"MIT"
] | null | null | null | djangoserver/src/merchants/admin.py | Higgins723/web-homework | f10d33fd6c5dface9350f95dab8ba2dcc7c9660f | [
"MIT"
] | null | null | null | djangoserver/src/merchants/admin.py | Higgins723/web-homework | f10d33fd6c5dface9350f95dab8ba2dcc7c9660f | [
"MIT"
] | 1 | 2022-01-19T06:55:41.000Z | 2022-01-19T06:55:41.000Z | from django.contrib import admin
from .models import Merchants
# Register your models here.
admin.site.register(Merchants)
| 17.857143 | 32 | 0.808 | 17 | 125 | 5.941176 | 0.647059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128 | 125 | 6 | 33 | 20.833333 | 0.926606 | 0.208 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
be8210bc177616ccb55e0288fd17aaf0a102423d | 77 | py | Python | tests/fake_plugins/core/error/__init__.py | DiscordFederation/Enigma | e337284c1750c45be46407a0baa19afe2b4eb88e | [
"Apache-2.0"
] | 6 | 2019-05-17T12:56:05.000Z | 2019-12-13T02:03:08.000Z | tests/fake_plugins/core/error/__init__.py | DiscordFederation/Enigma | e337284c1750c45be46407a0baa19afe2b4eb88e | [
"Apache-2.0"
] | 17 | 2019-01-24T04:15:49.000Z | 2020-05-14T14:04:04.000Z | tests/fake_plugins/core/error/__init__.py | OpenDebates/Erin | e337284c1750c45be46407a0baa19afe2b4eb88e | [
"Apache-2.0"
] | 1 | 2018-05-12T03:57:18.000Z | 2018-05-12T03:57:18.000Z | from tests.fake_plugins.core.error.command_error_handler import CommandError
| 38.5 | 76 | 0.896104 | 11 | 77 | 6 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051948 | 77 | 1 | 77 | 77 | 0.90411 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
be8e8e3c3103410115e3eca7bf9a4e746956fef2 | 247 | py | Python | toontown/toon/DistributedToonUD.py | TheFamiliarScoot/open-toontown | 678313033174ea7d08e5c2823bd7b473701ff547 | [
"BSD-3-Clause"
] | 99 | 2019-11-02T22:25:00.000Z | 2022-02-03T03:48:00.000Z | toontown/toon/DistributedToonUD.py | TheFamiliarScoot/open-toontown | 678313033174ea7d08e5c2823bd7b473701ff547 | [
"BSD-3-Clause"
] | 42 | 2019-11-03T05:31:08.000Z | 2022-03-16T22:50:32.000Z | toontown/toon/DistributedToonUD.py | TheFamiliarScoot/open-toontown | 678313033174ea7d08e5c2823bd7b473701ff547 | [
"BSD-3-Clause"
] | 57 | 2019-11-03T07:47:37.000Z | 2022-03-22T00:41:49.000Z | from direct.directnotify import DirectNotifyGlobal
from direct.distributed.DistributedObjectUD import DistributedObjectUD
class DistributedToonUD(DistributedObjectUD):
notify = DirectNotifyGlobal.directNotify.newCategory('DistributedToonUD')
| 41.166667 | 77 | 0.874494 | 19 | 247 | 11.368421 | 0.578947 | 0.092593 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072874 | 247 | 5 | 78 | 49.4 | 0.943231 | 0 | 0 | 0 | 0 | 0 | 0.068826 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
bec424cc48e1d749472344e22769d2dfecf16471 | 38 | py | Python | library/source2/resource_types/vwrld/__init__.py | anderlli0053/SourceIO | 3c0c4839939ce698439987ac52154f89ee2f5341 | [
"MIT"
] | 199 | 2019-04-02T02:30:58.000Z | 2022-03-30T21:29:49.000Z | library/source2/resource_types/vwrld/__init__.py | anderlli0053/SourceIO | 3c0c4839939ce698439987ac52154f89ee2f5341 | [
"MIT"
] | 113 | 2019-03-03T19:36:25.000Z | 2022-03-31T19:44:05.000Z | library/source2/resource_types/vwrld/__init__.py | anderlli0053/SourceIO | 3c0c4839939ce698439987ac52154f89ee2f5341 | [
"MIT"
] | 38 | 2019-05-15T16:49:30.000Z | 2022-03-22T03:40:43.000Z | from .world import ValveCompiledWorld
| 19 | 37 | 0.868421 | 4 | 38 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
fe33e1ec427244059fd37837a902bedcf3da1523 | 79 | py | Python | src/main.py | Reeperto/ev3printer | 2e7d48b64bbe53c6639cbb318472df61aba4d82e | [
"MIT"
] | 1 | 2022-03-21T19:22:45.000Z | 2022-03-21T19:22:45.000Z | src/main.py | Reeperto/ev3printer | 2e7d48b64bbe53c6639cbb318472df61aba4d82e | [
"MIT"
] | null | null | null | src/main.py | Reeperto/ev3printer | 2e7d48b64bbe53c6639cbb318472df61aba4d82e | [
"MIT"
] | null | null | null | # TODO: Make main application code; currently doing all submodules and commands | 79 | 79 | 0.822785 | 11 | 79 | 5.909091 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139241 | 79 | 1 | 79 | 79 | 0.955882 | 0.974684 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
fe8b5d634df019e8c0f074d6953d946b6b080d83 | 79 | py | Python | lib/data_structures/__init__.py | carmocca/UVA | c02bf55fc444309c94d938618911f22b0d9a14e1 | [
"MIT"
] | 3 | 2019-05-05T06:00:06.000Z | 2021-02-25T19:03:32.000Z | lib/data_structures/__init__.py | carmocca/UVA | c02bf55fc444309c94d938618911f22b0d9a14e1 | [
"MIT"
] | null | null | null | lib/data_structures/__init__.py | carmocca/UVA | c02bf55fc444309c94d938618911f22b0d9a14e1 | [
"MIT"
] | 3 | 2019-10-16T15:42:58.000Z | 2021-04-11T16:50:20.000Z | from .disjoint_set import DisjointSet
from .priority_queue import PriorityQueue | 39.5 | 41 | 0.886076 | 10 | 79 | 6.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088608 | 79 | 2 | 41 | 39.5 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
fea061c31942d33a2b3062eb8e3058cbf295029e | 77 | py | Python | mumu/model/__init__.py | mingminyu/mumu | e9f6c86a0b678ce4467ffba7f3dc4c0c8f971ff8 | [
"Apache-2.0"
] | 1 | 2021-06-22T16:57:28.000Z | 2021-06-22T16:57:28.000Z | mumu/model/__init__.py | mingminyu/mumu | e9f6c86a0b678ce4467ffba7f3dc4c0c8f971ff8 | [
"Apache-2.0"
] | null | null | null | mumu/model/__init__.py | mingminyu/mumu | e9f6c86a0b678ce4467ffba7f3dc4c0c8f971ff8 | [
"Apache-2.0"
] | null | null | null | from ._evaluate import plot_ks_auc, plot_score_dist, plot_slopping, plot_all
| 38.5 | 76 | 0.857143 | 13 | 77 | 4.538462 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 77 | 1 | 77 | 77 | 0.842857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
22a56e24b0968236e31d7d1730fb523c5a728a1d | 99 | py | Python | Chapter09/ch9_input5.py | PacktPublishing/Applied-Computational-Thinking-with-Python | fd9982383c5b473ffa1640998540d602876816e5 | [
"MIT"
] | 18 | 2020-11-27T22:41:12.000Z | 2021-12-27T08:20:46.000Z | Chapter09/ch9_input5.py | PacktPublishing/Applied-Computational-Thinking-with-Python | fd9982383c5b473ffa1640998540d602876816e5 | [
"MIT"
] | null | null | null | Chapter09/ch9_input5.py | PacktPublishing/Applied-Computational-Thinking-with-Python | fd9982383c5b473ffa1640998540d602876816e5 | [
"MIT"
] | 8 | 2020-11-30T17:51:11.000Z | 2021-12-25T05:23:02.000Z | name1, name2 = input("Enter First Name: "), input("Enter Last Name: ")
print(name1 + " " + name2)
| 24.75 | 70 | 0.626263 | 13 | 99 | 4.769231 | 0.615385 | 0.322581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.049383 | 0.181818 | 99 | 3 | 71 | 33 | 0.716049 | 0 | 0 | 0 | 0 | 0 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
22c4bc2f1399c2fc1d1d54d854a1647f487b4f50 | 50 | py | Python | qulacsvis/__init__.py | Qulacs-Osaka/qulacs-visualizer | 5e62be697eea4b75a654d5b53603899f7dfe3749 | [
"MIT"
] | 1 | 2021-09-13T11:30:04.000Z | 2021-09-13T11:30:04.000Z | qulacsvis/__init__.py | Qulacs-Osaka/qulacs-visualizer | 5e62be697eea4b75a654d5b53603899f7dfe3749 | [
"MIT"
] | 39 | 2021-09-07T05:05:38.000Z | 2022-03-14T04:42:57.000Z | qulacsvis/__init__.py | Qulacs-Osaka/qulacs-visualizer | 5e62be697eea4b75a654d5b53603899f7dfe3749 | [
"MIT"
] | 1 | 2022-01-21T06:11:40.000Z | 2022-01-21T06:11:40.000Z | from .visualization import circuit_drawer # noqa
| 25 | 49 | 0.82 | 6 | 50 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14 | 50 | 1 | 50 | 50 | 0.930233 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
22f465948349b34d21b32704ba69af6ce7af66c3 | 492 | py | Python | django_jinja/views/generic/edit.py | theY4Kman/django-jinja | 03e05b6689582a0af4b82d93f188ecbcb7a85f23 | [
"BSD-3-Clause"
] | 210 | 2015-05-21T16:54:05.000Z | 2022-01-06T01:24:52.000Z | django_jinja/views/generic/edit.py | theY4Kman/django-jinja | 03e05b6689582a0af4b82d93f188ecbcb7a85f23 | [
"BSD-3-Clause"
] | 139 | 2015-05-15T11:01:03.000Z | 2022-03-29T21:13:04.000Z | django_jinja/views/generic/edit.py | theY4Kman/django-jinja | 03e05b6689582a0af4b82d93f188ecbcb7a85f23 | [
"BSD-3-Clause"
] | 84 | 2015-05-15T09:35:22.000Z | 2021-09-03T13:14:44.000Z | from django.views.generic.edit import CreateView as _django_CreateView
from django.views.generic.edit import DeleteView as _django_DeleteView
from django.views.generic.edit import UpdateView as _django_UpdateView
from .base import Jinja2TemplateResponseMixin
class CreateView(Jinja2TemplateResponseMixin, _django_CreateView):
pass
class DeleteView(Jinja2TemplateResponseMixin, _django_DeleteView):
pass
class UpdateView(Jinja2TemplateResponseMixin, _django_UpdateView):
pass
| 30.75 | 70 | 0.851626 | 52 | 492 | 7.826923 | 0.269231 | 0.07371 | 0.110565 | 0.162162 | 0.235872 | 0.235872 | 0 | 0 | 0 | 0 | 0 | 0.00907 | 0.103659 | 492 | 15 | 71 | 32.8 | 0.913832 | 0 | 0 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.3 | 0.4 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
22f5b0ee40e2771405c4472ff39bc7e0320e05f1 | 144 | py | Python | pytest_reorder/__init__.py | not-raspberry/pytest_reorder | 0cc9333c1641fcbb426791bcbcf52d6e50530eed | [
"MIT"
] | 4 | 2016-04-10T00:11:38.000Z | 2019-06-21T02:43:09.000Z | pytest_reorder/__init__.py | not-raspberry/pytest_reorder | 0cc9333c1641fcbb426791bcbcf52d6e50530eed | [
"MIT"
] | 3 | 2016-05-13T13:44:10.000Z | 2018-05-29T23:05:30.000Z | pytest_reorder/__init__.py | not-raspberry/pytest_reorder | 0cc9333c1641fcbb426791bcbcf52d6e50530eed | [
"MIT"
] | 1 | 2018-05-29T16:19:52.000Z | 2018-05-29T16:19:52.000Z | """Tests reordering utility."""
from .reorder import (
DEFAULT_ORDER, default_reordering_hook, make_reordering_hook, unpack_test_ordering
)
| 28.8 | 86 | 0.798611 | 17 | 144 | 6.352941 | 0.764706 | 0.259259 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 144 | 4 | 87 | 36 | 0.84375 | 0.173611 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
fe1daed109261b5752d19e13f89313211b1f401e | 87 | py | Python | p1_basic/day16_21module/day21/bbbbb/glance2/cmd/__init__.py | dong-pro/fullStackPython | 5ad8662f7b57f14c8529e7eaf64290eeda773557 | [
"Apache-2.0"
] | 1 | 2020-04-03T01:32:05.000Z | 2020-04-03T01:32:05.000Z | p1_basic/day16_21module/day21/bbbbb/glance2/db/__init__.py | dong-pro/fullStackPython | 5ad8662f7b57f14c8529e7eaf64290eeda773557 | [
"Apache-2.0"
] | null | null | null | p1_basic/day16_21module/day21/bbbbb/glance2/db/__init__.py | dong-pro/fullStackPython | 5ad8662f7b57f14c8529e7eaf64290eeda773557 | [
"Apache-2.0"
] | null | null | null | # -*- coding:UTF-8 -*-
# @Time: 2019/9/20 14:11
# @Author: wyd
# @File: __init__.py.py
| 17.4 | 24 | 0.574713 | 15 | 87 | 3.066667 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164384 | 0.16092 | 87 | 4 | 25 | 21.75 | 0.465753 | 0.896552 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
a3c33c03c532569f188819ee3bb6fa7c942271e7 | 100 | py | Python | lib/rouge_images.py | kod3000/hc-x1000 | d18401d463f1ffbe5b8683157ddfe88d79d9198a | [
"MIT"
] | 1 | 2021-12-27T05:44:27.000Z | 2021-12-27T05:44:27.000Z | lib/rouge_images.py | kod3000/hc-x1000 | d18401d463f1ffbe5b8683157ddfe88d79d9198a | [
"MIT"
] | null | null | null | lib/rouge_images.py | kod3000/hc-x1000 | d18401d463f1ffbe5b8683157ddfe88d79d9198a | [
"MIT"
] | null | null | null | import numpy
class imageRouge():
def blank(w,h):
return numpy.zeros((w,h,3), numpy.uint8)
| 11.111111 | 42 | 0.65 | 16 | 100 | 4.0625 | 0.75 | 0.061538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024691 | 0.19 | 100 | 8 | 43 | 12.5 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
a3e78d0b9e29ca9ed3feb2ab6398b6bff28c064e | 181 | py | Python | Sem3/Python/iterators/iterator.py | nsudhanva/mca-code | 812348ce53edbe0f42f85a9c362bfc8aad64e1e7 | [
"MIT"
] | null | null | null | Sem3/Python/iterators/iterator.py | nsudhanva/mca-code | 812348ce53edbe0f42f85a9c362bfc8aad64e1e7 | [
"MIT"
] | null | null | null | Sem3/Python/iterators/iterator.py | nsudhanva/mca-code | 812348ce53edbe0f42f85a9c362bfc8aad64e1e7 | [
"MIT"
] | 2 | 2018-10-12T06:38:14.000Z | 2019-01-30T04:38:03.000Z | some_list = [1,23,45,6,6]
my_iter = iter(some_list)
print(next(my_iter))
print(next(my_iter))
print(next(my_iter))
print(my_iter.__next__())
print(my_iter.__next__())
next(my_iter) | 20.111111 | 25 | 0.745856 | 35 | 181 | 3.371429 | 0.285714 | 0.355932 | 0.338983 | 0.381356 | 0.423729 | 0.423729 | 0.423729 | 0.423729 | 0.423729 | 0 | 0 | 0.04142 | 0.066298 | 181 | 9 | 26 | 20.111111 | 0.656805 | 0 | 0 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.625 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
a3efb5150954a4bb99a02f006714da40fe09bf49 | 70 | py | Python | deephyper/benchmark/nas/linearRegMultiVar/__init__.py | Z223I/deephyper | 4fd1054dc22f15197567bdd93c6e7a95a614b8e2 | [
"BSD-3-Clause"
] | 185 | 2018-11-06T18:49:47.000Z | 2022-03-31T22:10:41.000Z | deephyper/benchmark/nas/linearRegMultiVar/__init__.py | Z223I/deephyper | 4fd1054dc22f15197567bdd93c6e7a95a614b8e2 | [
"BSD-3-Clause"
] | 108 | 2018-12-17T17:58:05.000Z | 2022-03-16T10:22:08.000Z | deephyper/benchmark/nas/linearRegMultiVar/__init__.py | Z223I/deephyper | 4fd1054dc22f15197567bdd93c6e7a95a614b8e2 | [
"BSD-3-Clause"
] | 50 | 2018-12-11T20:41:41.000Z | 2022-02-25T19:50:47.000Z | from deephyper.benchmark.nas.linearRegMultiVar.problem import Problem
| 35 | 69 | 0.885714 | 8 | 70 | 7.75 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 70 | 1 | 70 | 70 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
430fc9f9461220452e6b2d818e97efe756f9b0de | 218 | py | Python | {{cookiecutter.project_name}}/tests/conftest.py | lukemiloszewski/python-template | 18cf3c62bfb8d1129bbfa55e3001ce7bfcb531fc | [
"MIT"
] | 1 | 2022-01-21T08:51:36.000Z | 2022-01-21T08:51:36.000Z | {{cookiecutter.project_name}}/tests/conftest.py | lukemiloszewski/python-template | 18cf3c62bfb8d1129bbfa55e3001ce7bfcb531fc | [
"MIT"
] | 1 | 2022-01-29T06:07:46.000Z | 2022-01-29T06:07:46.000Z | {{cookiecutter.project_name}}/tests/conftest.py | lukemiloszewski/python-template | 18cf3c62bfb8d1129bbfa55e3001ce7bfcb531fc | [
"MIT"
] | null | null | null | """Fixture functions for the test suite."""
import pytest
from typer.testing import CliRunner
@pytest.fixture
def runner() -> CliRunner:
"""Fixture for invoking command-line interfaces."""
return CliRunner()
| 21.8 | 55 | 0.729358 | 26 | 218 | 6.115385 | 0.730769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155963 | 218 | 9 | 56 | 24.222222 | 0.86413 | 0.380734 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
431b0a19fbc6f25d0385606e63e9fa011d43fed1 | 112 | py | Python | examples/2021_12_31/py_files/modules.py | jagarciap/SCSI | 0972548adf17a27b78ef2865a837bf20aadca3e9 | [
"MIT"
] | null | null | null | examples/2021_12_31/py_files/modules.py | jagarciap/SCSI | 0972548adf17a27b78ef2865a837bf20aadca3e9 | [
"MIT"
] | null | null | null | examples/2021_12_31/py_files/modules.py | jagarciap/SCSI | 0972548adf17a27b78ef2865a837bf20aadca3e9 | [
"MIT"
] | 1 | 2022-01-18T10:24:39.000Z | 2022-01-18T10:24:39.000Z | import numpy
import scipy
import vtk
import matplotlib
import evtk.hl
import pandas
import pdb
pdb.set_trace()
| 11.2 | 17 | 0.821429 | 18 | 112 | 5.055556 | 0.611111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 112 | 9 | 18 | 12.444444 | 0.947917 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.875 | 0 | 0.875 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
431f74d911245660cd1cf1fe549388d1ba7ebfed | 92 | py | Python | lib/NetworkPerturbations/__init__.py | breecummins/NetworkPerturbations | 54225567032c2660dbbeb424e45c52d9ff484dc8 | [
"MIT"
] | 1 | 2018-12-28T20:57:45.000Z | 2018-12-28T20:57:45.000Z | lib/NetworkPerturbations/__init__.py | breecummins/NetworkPerturbations | 54225567032c2660dbbeb424e45c52d9ff484dc8 | [
"MIT"
] | null | null | null | lib/NetworkPerturbations/__init__.py | breecummins/NetworkPerturbations | 54225567032c2660dbbeb424e45c52d9ff484dc8 | [
"MIT"
] | 1 | 2016-12-23T20:24:49.000Z | 2016-12-23T20:24:49.000Z | from NetworkPerturbations.perturbations import *
from NetworkPerturbations.queries import *
| 30.666667 | 48 | 0.869565 | 8 | 92 | 10 | 0.625 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 92 | 2 | 49 | 46 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
4321de372f22746ec73cb4f5e3f2b8bb96bd4e69 | 56 | py | Python | katas/kyu_7/number_pairs.py | the-zebulan/CodeWars | 1eafd1247d60955a5dfb63e4882e8ce86019f43a | [
"MIT"
] | 40 | 2016-03-09T12:26:20.000Z | 2022-03-23T08:44:51.000Z | katas/kyu_7/number_pairs.py | akalynych/CodeWars | 1eafd1247d60955a5dfb63e4882e8ce86019f43a | [
"MIT"
] | null | null | null | katas/kyu_7/number_pairs.py | akalynych/CodeWars | 1eafd1247d60955a5dfb63e4882e8ce86019f43a | [
"MIT"
] | 36 | 2016-11-07T19:59:58.000Z | 2022-03-31T11:18:27.000Z | def get_larger_numbers(a, b):
    return map(max, a, b)
| 18.666667 | 29 | 0.660714 | 11 | 56 | 3.181818 | 0.818182 | 0.114286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196429 | 56 | 2 | 30 | 28 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
43393e9fdbf42cbb56ec242e85dca068e5767656 | 10,462 | py | Python | src/pbn_api/tests/test_client.py | iplweb/django-bpp | 85f183a99d8d5027ae4772efac1e4a9f21675849 | [
"BSD-3-Clause"
] | 1 | 2017-04-27T19:50:02.000Z | 2017-04-27T19:50:02.000Z | src/pbn_api/tests/test_client.py | mpasternak/django-bpp | 434338821d5ad1aaee598f6327151aba0af66f5e | [
"BSD-3-Clause"
] | 41 | 2019-11-07T00:07:02.000Z | 2022-02-27T22:09:39.000Z | src/pbn_api/tests/test_client.py | iplweb/bpp | f027415cc3faf1ca79082bf7bacd4be35b1a6fdf | [
"BSD-3-Clause"
] | null | null | null | import pytest
from fixtures.pbn_api import MOCK_RETURNED_MONGODB_DATA
from pbn_api.adapters.wydawnictwo import WydawnictwoPBNAdapter
from pbn_api.client import (
PBN_DELETE_PUBLICATION_STATEMENT,
PBN_GET_INSTITUTION_STATEMENTS,
PBN_GET_PUBLICATION_BY_ID_URL,
PBN_POST_PUBLICATIONS_URL,
)
from pbn_api.exceptions import (
HttpException,
PKZeroExportDisabled,
SameDataUploadedRecently,
)
from pbn_api.models import SentData
from pbn_api.tests.utils import middleware
from django.contrib.messages import get_messages
from bpp.admin.helpers import sprobuj_wgrac_do_pbn
from bpp.decorators import json
def test_PBNClient_test_upload_publication_nie_trzeba(
pbn_client, pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina
):
pbn_client.transport.return_values[PBN_POST_PUBLICATIONS_URL] = {"objectId": None}
SentData.objects.updated(
pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina,
WydawnictwoPBNAdapter(
pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina
).pbn_get_json(),
)
with pytest.raises(SameDataUploadedRecently):
pbn_client.upload_publication(pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina)
class PBNTestClientException(Exception):
pass
def test_PBNClient_test_upload_publication_exception(
pbn_client, pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina
):
pbn_client.transport.return_values[
PBN_POST_PUBLICATIONS_URL
] = PBNTestClientException("nei")
with pytest.raises(PBNTestClientException):
pbn_client.upload_publication(pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina)
def test_PBNClient_test_upload_publication_wszystko_ok(
pbn_client, pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina, pbn_publication
):
pbn_client.transport.return_values[PBN_POST_PUBLICATIONS_URL] = {
"objectId": pbn_publication.pk
}
ret, js = pbn_client.upload_publication(
pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina
)
assert ret["objectId"] == pbn_publication.pk
def test_sync_publication_to_samo_id(
pbn_client,
pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina,
pbn_publication,
pbn_autor,
pbn_jednostka,
):
pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.pbn_uid = pbn_publication
pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.save()
stare_id = pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.pbn_uid_id
pbn_client.transport.return_values[PBN_POST_PUBLICATIONS_URL] = {
"objectId": pbn_publication.pk
}
pbn_client.transport.return_values[
PBN_GET_PUBLICATION_BY_ID_URL.format(id=pbn_publication.pk)
] = MOCK_RETURNED_MONGODB_DATA
pbn_client.transport.return_values[
PBN_GET_INSTITUTION_STATEMENTS + "?publicationId=123&size=5120"
] = [
{
"id": "100",
"addedTimestamp": "2020.05.06",
"institutionId": pbn_jednostka.pbn_uid_id,
"personId": pbn_autor.pbn_uid_id,
"publicationId": pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.pbn_uid_id,
"area": "200",
"inOrcid": True,
"type": "FOOBAR",
}
]
pbn_client.sync_publication(pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina)
pbn_publication.refresh_from_db()
assert pbn_publication.versions[0]["baz"] == "quux"
assert stare_id == pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.pbn_uid_id
def test_sync_publication_tekstowo_podane_id(
pbn_client, pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina, pbn_publication
):
pbn_client.transport.return_values[PBN_POST_PUBLICATIONS_URL] = {
"objectId": pbn_publication.pk
}
pbn_client.transport.return_values[
PBN_GET_PUBLICATION_BY_ID_URL.format(id=pbn_publication.pk)
] = MOCK_RETURNED_MONGODB_DATA
pbn_client.transport.return_values[
PBN_GET_INSTITUTION_STATEMENTS + "?publicationId=123&size=5120"
] = []
pbn_client.sync_publication(
f"wydawnictwo_zwarte:{pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.pk}"
)
pbn_publication.refresh_from_db()
assert pbn_publication.versions[0]["baz"] == "quux"
def test_sync_publication_nowe_id(
pbn_client, pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina, pbn_publication
):
assert pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.pbn_uid_id is None
stare_id = pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.pbn_uid_id
pbn_client.transport.return_values[PBN_POST_PUBLICATIONS_URL] = {
"objectId": pbn_publication.pk
}
pbn_client.transport.return_values[
PBN_GET_PUBLICATION_BY_ID_URL.format(id=pbn_publication.pk)
] = MOCK_RETURNED_MONGODB_DATA
pbn_client.transport.return_values[
PBN_GET_INSTITUTION_STATEMENTS + "?publicationId=123&size=5120"
] = []
pbn_client.sync_publication(pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina)
pbn_publication.refresh_from_db()
assert pbn_publication.versions[0]["baz"] == "quux"
assert stare_id != pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.pbn_uid_id
def test_sync_publication_wysylka_z_zerowym_pk(
pbn_client,
pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina,
pbn_publication,
pbn_uczelnia,
):
pbn_uczelnia.pbn_api_nie_wysylaj_prac_bez_pk = True
pbn_uczelnia.save()
pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.punkty_kbn = 0
pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.save()
pbn_client.transport.return_values[PBN_POST_PUBLICATIONS_URL] = {
"objectId": pbn_publication.pk
}
pbn_client.transport.return_values[
PBN_GET_PUBLICATION_BY_ID_URL.format(id=pbn_publication.pk)
] = MOCK_RETURNED_MONGODB_DATA
pbn_client.transport.return_values[
PBN_GET_INSTITUTION_STATEMENTS + "?publicationId=123&size=5120"
] = []
# To pójdzie
pbn_client.sync_publication(
pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina, export_pk_zero=True
)
# To nie pójdzie
with pytest.raises(PKZeroExportDisabled):
pbn_client.sync_publication(
pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina, export_pk_zero=False
)
def test_helpers_wysylka_z_zerowym_pk(
rf, pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina, pbn_uczelnia, admin_user
):
pbn_uczelnia.pbn_integracja = (
pbn_uczelnia.pbn_aktualizuj_na_biezaco
) = pbn_uczelnia.pbn_api_nie_wysylaj_prac_bez_pk = True
pbn_uczelnia.save()
pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.punkty_kbn = 0
pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.save()
req = rf.get("/")
req._uczelnia = pbn_uczelnia
req.user = admin_user
# I jeszcze test z poziomu admina czy parametr z pbn_uczelnia jest przekazywany
with middleware(req):
sprobuj_wgrac_do_pbn(req, pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina)
msg = list(get_messages(req))
assert "wyłączony w konfiguracji" in msg[0].message
def test_sync_publication_kasuj_oswiadczenia_przed_wszystko_dobrze(
pbn_client,
pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina,
pbn_publication,
pbn_autor,
pbn_jednostka,
):
pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.pbn_uid = pbn_publication
pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.save()
stare_id = pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.pbn_uid_id
pbn_client.transport.return_values[PBN_POST_PUBLICATIONS_URL] = {
"objectId": pbn_publication.pk
}
pbn_client.transport.return_values[
PBN_GET_PUBLICATION_BY_ID_URL.format(id=pbn_publication.pk)
] = MOCK_RETURNED_MONGODB_DATA
pbn_client.transport.return_values[
PBN_DELETE_PUBLICATION_STATEMENT.format(publicationId=pbn_publication.pk)
] = []
pbn_client.transport.return_values[
PBN_GET_INSTITUTION_STATEMENTS + "?publicationId=123&size=5120"
] = [
{
"id": "100",
"addedTimestamp": "2020.05.06",
"institutionId": pbn_jednostka.pbn_uid_id,
"personId": pbn_autor.pbn_uid_id,
"publicationId": pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.pbn_uid_id,
"area": "200",
"inOrcid": True,
"type": "FOOBAR",
}
]
pbn_client.sync_publication(
pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina,
delete_statements_before_upload=True,
)
pbn_publication.refresh_from_db()
assert pbn_publication.versions[0]["baz"] == "quux"
assert stare_id == pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.pbn_uid_id

def test_sync_publication_kasuj_oswiadczenia_przed_blad_400_nie_zaburzy(
    pbn_client,
    pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina,
    pbn_publication,
    pbn_autor,
    pbn_jednostka,
):
    pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.pbn_uid = pbn_publication
    pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.save()
    stare_id = pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.pbn_uid_id

    pbn_client.transport.return_values[PBN_POST_PUBLICATIONS_URL] = {
        "objectId": pbn_publication.pk
    }
    pbn_client.transport.return_values[
        PBN_GET_PUBLICATION_BY_ID_URL.format(id=pbn_publication.pk)
    ] = MOCK_RETURNED_MONGODB_DATA

    url = PBN_DELETE_PUBLICATION_STATEMENT.format(publicationId=pbn_publication.pk)
    err_json = {
        "code": 400,
        "message": "Bad Request",
        "description": "Validation failed.",
        "details": {
            # f-string, so the real publication id is interpolated into the mocked message
            "publicationId": "Nie można usunąć oświadczeń. Nie istnieją oświadczenia dla publikacji "
            f"(id = {pbn_publication.pk}) i instytucji (id = XXX)."
        },
    }
    pbn_client.transport.return_values[url] = HttpException(
        400, url, json.dumps(err_json)
    )
    pbn_client.transport.return_values[
        PBN_GET_INSTITUTION_STATEMENTS + "?publicationId=123&size=5120"
    ] = [
        {
            "id": "100",
            "addedTimestamp": "2020.05.06",
            "institutionId": pbn_jednostka.pbn_uid_id,
            "personId": pbn_autor.pbn_uid_id,
            "publicationId": pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.pbn_uid_id,
            "area": "200",
            "inOrcid": True,
            "type": "FOOBAR",
        }
    ]

    pbn_client.sync_publication(
        pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina,
        delete_statements_before_upload=True,
    )

    pbn_publication.refresh_from_db()
    assert pbn_publication.versions[0]["baz"] == "quux"
    assert stare_id == pbn_wydawnictwo_zwarte_z_autorem_z_dyscyplina.pbn_uid_id
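The tests above drive a fake transport whose `return_values` dict maps request URLs to canned responses, with stored exceptions standing in for HTTP errors. As a clarifying sketch (not the real `pbn_client` fixture; all names here are hypothetical), the pattern reduces to:

```python
class FakeTransport:
    """Stand-in transport: maps a request URL to a canned response or an exception."""

    def __init__(self):
        self.return_values = {}

    def get(self, url):
        value = self.return_values[url]
        # A stored exception simulates an HTTP error response (like the 400 case above).
        if isinstance(value, Exception):
            raise value
        return value


transport = FakeTransport()
transport.return_values["/api/publications/123"] = {"objectId": 123}
transport.return_values["/api/publications/999"] = ValueError("400 Bad Request")

assert transport.get("/api/publications/123") == {"objectId": 123}
try:
    transport.get("/api/publications/999")
except ValueError as err:
    assert "400" in str(err)
```

The design point is that tests pre-seed every URL the client will hit, so a missing entry fails loudly with a `KeyError` instead of silently hitting the network.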


# === File: tests/test_header_overrides.py | repo: desophos/ChromeController | license: BSD-3-Clause ===
import unittest
import socket
import json
import base64
import zlib
import gzip
import ChromeController
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread
from . import testing_server
CHROME_BINARY_NAME = "google-chrome"
class TestChromium(unittest.TestCase):
    # def setUp(self):
    #     self.cr = ChromeController.TabPooledChromium("google-chrome")

    # def tearDown(self):
    #     del self.cr

    def fetch_check_headers(self, expect_headers):
        try:
            # Configure mock server.
            self.mock_server_port, self.mock_server, self.mock_server_thread = testing_server.start_server(self, expect_headers)
            tgturl = "http://localhost:{}".format(self.mock_server_port)

            with ChromeController.ChromeContext(CHROME_BINARY_NAME) as cr:
                ret = cr.update_headers(expect_headers)
                # print("update_headers return:")
                # print(ret)
                # print("")
                resp = cr.blocking_navigate_and_get_source(tgturl)

                self.assertEqual(resp['content'], 'Root OK?')
                self.assertEqual(resp['binary'], False)
                self.assertEqual(resp['mimetype'], "text/html")
        finally:
            self.mock_server.shutdown()
    def test_basic_fetch_1(self):
        self.fetch_check_headers({})

    def test_custom_ua_1(self):
        '''
        Dumb basic check
        '''
        expect_headers = {
            'User-Agent' : r"Test test testy test testttttttt"
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_ua_2(self):
        '''
        Special chars to see if we can intentionally break something
        '''
        expect_headers = {
            'User-Agent' : r"Test !@#$%^&*(;;);_;+;\\///\\ \"':>?<|}{][;;][[p\\tblah"
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_ua_3(self):
        '''
        What if it's empty?
        '''
        expect_headers = {
            'User-Agent' : r""
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_ua_4(self):
        '''
        Or ridiculously long
        '''
        expect_headers = {
            'User-Agent' : r"wat" * 5000
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_ua_5(self):
        '''
        Or something that looks like an accept header
        '''
        expect_headers = {
            'User-Agent' : r"text/html, application/xhtml+xml, application/xml;q=0.9, */*, */*, */*, */*, */*, */*, */*, */*, */* "
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_lang_1(self):
        '''
        Normal lang
        '''
        expect_headers = {
            'Accept-Language' : r"en-US,en;q=0.9"
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_lang_2(self):
        '''
        Special chars
        '''
        expect_headers = {
            'Accept-Language' : r"Test !@#$%^&*(;;);_;+;\\///\\ \"':>?<|}{][;;][[p\\tb ''' \"\" \" lah"
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_lang_3(self):
        '''
        What if it's empty?
        '''
        expect_headers = {
            'Accept-Language' : r""
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_lang_4(self):
        '''
        Or ridiculously long
        '''
        expect_headers = {
            'Accept-Language' : r"wat" * 5000
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_lang_5(self):
        '''
        Or something that looks like an accept header
        '''
        expect_headers = {
            'Accept-Language' : r"text/html, application/xhtml+xml, application/xml;q=0.9, */*, */*, */*, */*, */*, */*, */*, */*, */* "
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_accept_1_1(self):
        '''
        Normal accept
        '''
        expect_headers = {
            'Accept-Encoding' : r"text/html, application/xhtml+xml, application/xml;q=0.9, */*;q=0.8"
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_accept_1_2(self):
        expect_headers = {
            'Accept-Encoding' : r"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_accept_1_3(self):
        expect_headers = {
            'Accept-Encoding' : r"text/html,application/xhtml+xml, application/xml;q=0.9, */*;q=0.8"
        }
        self.fetch_check_headers(expect_headers)

    @unittest.expectedFailure
    def test_custom_accept_2_1(self):
        '''
        Normal accept
        '''
        expect_headers = {
            'Accept' : r"text/html, application/xml;q=0.9, application/xhtml+xml, image/png, image/webp, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.8"
        }
        self.fetch_check_headers(expect_headers)

    @unittest.expectedFailure
    def test_custom_accept_2_2(self):
        expect_headers = {
            'Accept' : r"text/html, application/xml;q=0.9,application/xhtml+xml,image/png,image/webp,image/jpeg, image/gif, image/x-xbitmap, */*;q=0.8"
        }
        self.fetch_check_headers(expect_headers)

    @unittest.expectedFailure
    def test_custom_accept_2_3(self):
        expect_headers = {
            'Accept' : r"text/html, application/xml;q=0.9,application/xhtml+xml, image/png,image/webp,image/jpeg, image/gif, image/x-xbitmap, */*;q=0.8"
        }
        self.fetch_check_headers(expect_headers)

    @unittest.expectedFailure
    def test_custom_accept_3(self):
        '''
        Send garbage
        '''
        expect_headers = {
            'Accept' : r"Test !@#$%^&*(;;);_;+;\\///\\ \"':>?<|}{][;;][[p\\tb ''' \"\" \" lah"
        }
        self.fetch_check_headers(expect_headers)

    @unittest.expectedFailure
    def test_custom_accept_4(self):
        '''
        What if it's empty?
        '''
        expect_headers = {
            'Accept' : r""
        }
        self.fetch_check_headers(expect_headers)

    @unittest.expectedFailure
    def test_custom_accept_5(self):
        '''
        Or ridiculously long/repeated
        '''
        expect_headers = {
            'Accept' : r"text/html, " * 5
        }
        self.fetch_check_headers(expect_headers)

    @unittest.expectedFailure
    def test_custom_accept_6(self):
        '''
        I was sending this as a bug at one point
        '''
        expect_headers = {
            'Accept' : r"text/html, application/xhtml+xml, application/xml;q=0.9, */*, */*, */*, */*, */*, */*, */*, */*, */* "
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_encoding_1_1(self):
        '''
        Normal accept
        '''
        expect_headers = {
            'Accept-Encoding' : r'gzip'
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_encoding_1_2(self):
        expect_headers = {
            'Accept-Encoding' : r'gzip, deflate'
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_encoding_1_3(self):
        expect_headers = {
            'Accept-Encoding' : r'deflate, gzip'
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_encoding_1_4(self):
        expect_headers = {
            'Accept-Encoding' : r'gzip, deflate'
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_encoding_1_5(self):
        expect_headers = {
            'Accept-Encoding' : r'gzip, deflate, sdch'
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_encoding_1_6(self):
        expect_headers = {
            'Accept-Encoding' : r'gzip,deflate, sdch'
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_encoding_1_7(self):
        expect_headers = {
            'Accept-Encoding' : r'sdch, gzip,deflate'
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_encoding_2(self):
        '''
        Send the wrong header
        '''
        expect_headers = {
            'Accept-Encoding' : r"text/html, application/xml;q=0.9, application/xhtml+xml, image/png, image/webp, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.8"
        }
        self.fetch_check_headers(expect_headers)

    # def test_custom_encoding_3(self):
    #     '''
    #     Send garbage
    #     '''
    #     expect_headers = {
    #         'Accept-Encoding' : r"Test !@#$%^&*(;;);_;+;\\///\\ \"':>?<|}{][;;][[p\\tb ''' \"\" \" lah"
    #     }
    #     self.fetch_check_headers(expect_headers)

    def test_custom_encoding_4(self):
        '''
        What if it's empty?
        '''
        expect_headers = {
            'Accept-Encoding' : r""
        }
        self.fetch_check_headers(expect_headers)

    def test_custom_encoding_5(self):
        '''
        Or ridiculously long/repeated
        '''
        expect_headers = {
            'Accept-Encoding' : r"gzip,deflate, sdch, " * 5
        }
        self.fetch_check_headers(expect_headers)

    @unittest.expectedFailure
    def test_setting_referrer_1(self):
        '''
        Referrers. See:
        https://bugs.chromium.org/p/chromium/issues/detail?id=795336
        https://bugs.chromium.org/p/chromium/issues/detail?id=767683 ?
        '''
        expect_headers = {
            'Referer' : r"http://www.googlez.com"
        }
        self.fetch_check_headers(expect_headers)

    @unittest.expectedFailure
    def test_setting_referrer_2(self):
        expect_headers = {
            'Referer' : r"htt;ljksdfhglkjshdg!@#$%^&*()_++_)(*&^%$#@!}{\":>?><|{|}{\\][\';//.,1209-82409587p://www.googlez.com"
        }
        self.fetch_check_headers(expect_headers)

    @unittest.expectedFailure
    def test_setting_referrer_3(self):
        expect_headers = {
            'Referer' : r"http://www.googlez.com" * 2
        }
        self.fetch_check_headers(expect_headers)

    def test_setting_referrer_4(self):
        expect_headers = {
            'Referer' : r""
        }
        self.fetch_check_headers(expect_headers)

    @unittest.expectedFailure
    def test_setting_host_1(self):
        expect_headers = {
            'Host' : r""
        }
        self.fetch_check_headers(expect_headers)

    @unittest.expectedFailure
    def test_setting_host_2(self):
        expect_headers = {
            'Host' : r"http://www.googlez.com"
        }
        self.fetch_check_headers(expect_headers)

    @unittest.expectedFailure
    def test_setting_host_3(self):
        expect_headers = {
            'Host' : r"www.googlez.com"
        }
        self.fetch_check_headers(expect_headers)

    @unittest.expectedFailure
    def test_setting_host_4(self):
        expect_headers = {
            'Host' : r"htt;ljksdfhglkjshdg!@#$%^&*()_++_)(*&^%$#@!}{\":>?><|{|}{\\][\';//.,1209-82409587p://www.googlez.com"
        }
        self.fetch_check_headers(expect_headers)

    @unittest.expectedFailure
    def test_setting_host_5(self):
        expect_headers = {
            'Host' : r"www.googlez.com" * 50
        }
        self.fetch_check_headers(expect_headers)

    def test_setting_random_1(self):
        expect_headers = {
            'Pineapple' : r"Banana"
        }
        self.fetch_check_headers(expect_headers)

    def test_setting_random_2(self):
        expect_headers = {
            'Pineapple' * 5 : r"Banana" * 5
        }
        self.fetch_check_headers(expect_headers)

    def test_setting_random_3(self):
        expect_headers = {
            'Pineapple' * 500 : r"Banana" * 500
        }
        self.fetch_check_headers(expect_headers)

    def test_setting_random_4(self):
        expect_headers = {
            'Pineapple' : r"htt;ljksdfhglkjshdg!@#$%^&*()_++_)(*&^%$#@!}{\":>?><|{|}{\\][\';//.,1209-82409587p://www.googlez.com"
        }
        self.fetch_check_headers(expect_headers)

    # def test_setting_random_5(self):
    #     expect_headers = {
    #         'Pineapple;Pineapple' : r"htt;ljksdfhglkjshdg!@#$%^&*()_++_)(*&^%$#@!}{\":>?><|{|}{\\][\';//.,1209-82409587p://www.googlez.com"
    #     }
    #     self.fetch_check_headers(expect_headers)

    # def test_setting_random_6(self):
    #     expect_headers = {
    #         'Pineapple=Pineapple' : r"htt;ljksdfhglkjshdg!@#$%^&*()_++_)(*&^%$#@!}{\":>?><|{|}{\\][\';//.,1209-82409587p://www.googlez.com"
    #     }
    #     self.fetch_check_headers(expect_headers)

    # def test_setting_random_7(self):
    #     expect_headers = {
    #         'Pineapple=Pineapplehtt;ljksdfhglkjshdg!@#$%^&*()_++_)(*&^%$#@!}{\":>?><|{|}{\\][\';//.,1209-82409587p://www.googlez.com'
    #             : r"htt;ljksdfhglkjshdg!@#$%^&*()_++_)(*&^%$#@!}{\":>?><|{|}{\\][\';//.,1209-82409587p://www.googlez.com"
    #     }
    #     self.fetch_check_headers(expect_headers)
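The tests above delegate to a `testing_server` module (not shown in this excerpt) that asserts on the headers Chromium actually sends. A self-contained sketch of such a mock server, using only the standard library (names here are hypothetical, not the real `testing_server` API):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


def start_echo_server():
    """Start a throwaway HTTP server that echoes the request headers back as JSON."""

    class EchoHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps(dict(self.headers)).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):
            # Keep test output quiet.
            pass

    # Port 0 asks the OS for a free ephemeral port.
    server = HTTPServer(("127.0.0.1", 0), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


server = start_echo_server()
url = "http://127.0.0.1:{}".format(server.server_port)
req = urllib.request.Request(url, headers={"User-Agent": "test-agent"})
with urllib.request.urlopen(req) as resp:
    seen = json.loads(resp.read().decode("utf-8"))
server.shutdown()

assert seen["User-Agent"] == "test-agent"
```

A test can then compare `seen` against `expect_headers` instead of trusting the browser's own reporting, which is the same verification strategy `fetch_check_headers` relies on.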


# === File: pyscf/pbc/mp/test/test_kpoint_stagger.py | repo: xinxing02/pyscf | license: Apache-2.0 ===
#!/usr/bin/env python
'''
Test code for
k-point spin-restricted periodic MP2 calculation using the staggered mesh method
Author: Xin Xing (xxing@berkeley.edu)
Reference: Staggered Mesh Method for Correlation Energy Calculations of Solids: Second-Order
Møller–Plesset Perturbation Theory, J. Chem. Theory Comput. 2021, 17, 8, 4733-4745
'''
import unittest
import numpy as np
from pyscf.pbc import gto as pbcgto
from pyscf.pbc import scf as pbcscf
from pyscf.pbc import df
from pyscf.pbc.mp.kmp2_stagger import KMP2_stagger

def run_kcell_fftdf(cell, nk):
    abs_kpts = cell.make_kpts(nk, wrap_around=True)
    kmf = pbcscf.KRHF(cell, abs_kpts)
    kmf.conv_tol = 1e-12
    emf = kmf.scf()
    emp2_sub = KMP2_stagger(kmf, flag_submesh=True).run()
    emp2_ext = KMP2_stagger(kmf, flag_submesh=False).run()
    return emf, emp2_sub.e_corr, emp2_ext.e_corr


def run_kcell_gdf(cell, nk):
    abs_kpts = cell.make_kpts(nk, wrap_around=True)
    kmf = pbcscf.KRHF(cell, abs_kpts)
    gdf = df.GDF(cell, abs_kpts).build()
    kmf.with_df = gdf
    kmf.conv_tol = 1e-12
    emf = kmf.scf()
    emp2_sub = KMP2_stagger(kmf, flag_submesh=True).run()
    emp2_ext = KMP2_stagger(kmf, flag_submesh=False).run()
    return emf, emp2_sub.e_corr, emp2_ext.e_corr


def run_kcell_complex_fftdf(cell, nk):
    abs_kpts = cell.make_kpts(nk, wrap_around=True)
    kmf = pbcscf.KRHF(cell, abs_kpts)
    kmf.conv_tol = 1e-12
    emf = kmf.scf()
    kmf.mo_coeff = [kmf.mo_coeff[i].astype(np.complex128) for i in range(np.prod(nk))]
    emp2_sub = KMP2_stagger(kmf, flag_submesh=True).run()
    emp2_ext = KMP2_stagger(kmf, flag_submesh=False).run()
    return emf, emp2_sub.e_corr, emp2_ext.e_corr


def run_kcell_complex_gdf(cell, nk):
    abs_kpts = cell.make_kpts(nk, wrap_around=True)
    kmf = pbcscf.KRHF(cell, abs_kpts)
    gdf = df.GDF(cell, abs_kpts).build()
    kmf.with_df = gdf
    kmf.conv_tol = 1e-12
    emf = kmf.scf()
    kmf.mo_coeff = [kmf.mo_coeff[i].astype(np.complex128) for i in range(np.prod(nk))]
    emp2_sub = KMP2_stagger(kmf, flag_submesh=True).run()
    emp2_ext = KMP2_stagger(kmf, flag_submesh=False).run()
    return emf, emp2_sub.e_corr, emp2_ext.e_corr

class KnownValues(unittest.TestCase):
    def test_222_h2_fftdf(self):
        cell = pbcgto.Cell()
        cell.atom = '''
        H 3.00 3.00 2.10
        H 3.00 3.00 3.90
        '''
        cell.a = '''
        6.0 0.0 0.0
        0.0 6.0 0.0
        0.0 0.0 6.0
        '''
        cell.unit = 'B'
        cell.pseudo = 'gth-pade'
        cell.basis = 'gth-szv'
        cell.verbose = 4
        cell.build()
        nk = [2, 2, 2]

        emf, emp2_sub, emp2_ext = run_kcell_fftdf(cell, nk)
        self.assertAlmostEqual(emf, -1.10046681450171, 9)
        self.assertAlmostEqual(emp2_sub, -0.0160900371069261, 9)
        self.assertAlmostEqual(emp2_ext, -0.0140288251933276, 9)

        emf, emp2_sub, emp2_ext = run_kcell_complex_fftdf(cell, nk)
        self.assertAlmostEqual(emp2_sub, -0.0160900371069261, 9)
        self.assertAlmostEqual(emp2_ext, -0.0140288251933276, 9)

    def test_222_h2_gdf(self):
        cell = pbcgto.Cell()
        cell.atom = '''
        H 3.00 3.00 2.10
        H 3.00 3.00 3.90
        '''
        cell.a = '''
        6.0 0.0 0.0
        0.0 6.0 0.0
        0.0 0.0 6.0
        '''
        cell.unit = 'B'
        cell.pseudo = 'gth-pade'
        cell.basis = 'gth-szv'
        cell.verbose = 4
        cell.build()
        nk = [2, 2, 2]

        emf, emp2_sub, emp2_ext = run_kcell_gdf(cell, nk)
        self.assertAlmostEqual(emf, -1.10186079943922, 9)
        self.assertAlmostEqual(emp2_sub, -0.0158364523431077, 9)
        self.assertAlmostEqual(emp2_ext, -0.0140278627430396, 9)

        emf, emp2_sub, emp2_ext = run_kcell_complex_gdf(cell, nk)
        self.assertAlmostEqual(emp2_sub, -0.0158364523431077, 9)
        self.assertAlmostEqual(emp2_ext, -0.0140278627430396, 9)

    def test_222_diamond_frozen(self):
        cell = pbcgto.Cell()
        cell.pseudo = 'gth-pade'
        cell.basis = 'gth-szv'
        cell.ke_cutoff = 100
        cell.atom = '''
        C 0. 0. 0.
        C 1.26349729, 0.7294805 , 0.51582061
        '''
        cell.a = '''
        2.52699457, 0. , 0.
        1.26349729, 2.18844149, 0.
        1.26349729, 0.7294805 , 2.06328243
        '''
        cell.unit = 'angstrom'
        cell.verbose = 4
        cell.build()
        nk = [2, 2, 2]
        abs_kpts = cell.make_kpts(nk, wrap_around=True)

        # FFTDF-based calculation
        kmf = pbcscf.KRHF(cell, abs_kpts)
        kmf.conv_tol = 1e-12
        kmf.scf()
        emp2_sub = KMP2_stagger(kmf, flag_submesh=True, frozen=[0, 1, 2]).run()
        emp2_ext = KMP2_stagger(kmf, flag_submesh=False, frozen=[0, 1, 2]).run()
        self.assertAlmostEqual(emp2_sub.e_corr, -0.0254955913664726, 9)
        self.assertAlmostEqual(emp2_ext.e_corr, -0.0126977970896905, 9)

        # GDF-based calculation
        kmf = pbcscf.KRHF(cell, abs_kpts)
        gdf = df.GDF(cell, abs_kpts).build()
        kmf.with_df = gdf
        kmf.conv_tol = 1e-12
        kmf.scf()
        emp2_sub = KMP2_stagger(kmf, flag_submesh=True, frozen=[0, 1, 2]).run()
        emp2_ext = KMP2_stagger(kmf, flag_submesh=False, frozen=[0, 1, 2]).run()
        self.assertAlmostEqual(emp2_sub.e_corr, -0.0252835750365586, 9)
        self.assertAlmostEqual(emp2_ext.e_corr, -0.0126846178079962, 9)

if __name__ == '__main__':
    print("Staggered KMP2 energy calculation test")
    unittest.main()


# === File: tests/__init__.py | repo: adsharma/zre_raft | license: MIT ===
"""Unit test package for zre_raft."""


# === File: scripts/generate_bmeg_file_manifest.py | repo: bmeg/bmeg-etl | license: MIT ===
import glob
import os
import subprocess
import shlex
import yaml
files = glob.glob("outputs/**/*.dvc")
EXCEPTIONS = [
    # unnormalized Compounds
    "outputs/pharmacodb/Compound.Vertex.json.gz",
    "outputs/g2p/Compound.Vertex.json.gz",
    "outputs/gdc/gdc.Compound.Vertex.json.gz",
    "outputs/dgidb/Compound.Vertex.json.gz",
    # unnormalized Compound edges
    "outputs/pharmacodb/DrugResponse_Compounds_Compound.Edge.json.gz",
    "outputs/pharmacodb/Compound_DrugResponses_DrugResponse.Edge.json.gz",
    "outputs/pharmacodb/Project_Compounds_Compound.Edge.json.gz",
    "outputs/pharmacodb/Compound_Projects_Project.Edge.json.gz",
    "outputs/g2p/G2PAssociation_Compounds_Compound.Edge.json.gz",
    "outputs/g2p/Compound_G2PAssociations_G2PAssociation.Edge.json.gz",
    "outputs/gdc/gdc.Case_Compounds_Compound.Edge.json.gz",
    "outputs/gdc/gdc.Compound_Cases_Case.Edge.json.gz",
    "outputs/gdc/gdc.Compound_Projects_Project.Edge.json.gz",
    "outputs/gdc/gdc.Project_Compounds_Compound.Edge.json.gz",
    "outputs/dgidb/G2PAssociation_Compounds_Compound.Edge.json.gz",
    "outputs/dgidb/Compound_G2PAssociations_G2PAssociation.Edge.json.gz",
    # unnormalized Phenotypes
    "outputs/ccle/ccle.Phenotype.Vertex.json.gz",
    "outputs/ctrp/ctrp.Phenotype.Vertex.json.gz",
    "outputs/gdsc/gdsc.Phenotype.Vertex.json.gz",
    "outputs/g2p/Phenotype.Vertex.json.gz",
    "outputs/gdc/Phenotype.Vertex.json.gz",
    # unnormalized Phenotype edges
    "outputs/ccle/ccle.Case_Phenotypes_Phenotype.Edge.json.gz",
    "outputs/ccle/ccle.Phenotype_Cases_Case.Edge.json.gz",
    "outputs/ccle/ccle.Sample_Phenotypes_Phenotype.Edge.json.gz",
    "outputs/ccle/ccle.Phenotype_Samples_Sample.Edge.json.gz",
    "outputs/ctrp/ctrp.Case_Phenotypes_Phenotype.Edge.json.gz",
    "outputs/ctrp/ctrp.Phenotype_Cases_Case.Edge.json.gz",
    "outputs/ctrp/ctrp.Sample_Phenotypes_Phenotype.Edge.json.gz",
    "outputs/ctrp/ctrp.Phenotype_Samples_Sample.Edge.json.gz",
    "outputs/gdsc/gdsc.Case_Phenotypes_Phenotype.Edge.json.gz",
    "outputs/gdsc/gdsc.Phenotype_Cases_Case.Edge.json.gz",
    "outputs/gdsc/gdsc.Sample_Phenotypes_Phenotype.Edge.json.gz",
    "outputs/gdsc/gdsc.Phenotype_Samples_Sample.Edge.json.gz",
    "outputs/g2p/G2PAssociation_Phenotypes_Phenotype.Edge.json.gz",
    "outputs/g2p/Phenotype_G2PAssociations_G2PAssociation.Edge.json.gz",
    "outputs/gdc/Case_Phenotypes_Phenotype.Edge.json.gz",
    "outputs/gdc/Phenotype_Cases_Case.Edge.json.gz",
    "outputs/gdc/Sample_Phenotypes_Phenotype.Edge.json.gz",
    "outputs/gdc/Phenotype_Samples_Sample.Edge.json.gz",
    # Deadletter
    "outputs/g2p/Deadletter.Vertex.json.gz",
    "outputs/mc3/Deadletter.Vertex.json.gz",
    # unnormalized Allele
    "outputs/ccle/maf.Allele.Vertex.json.gz",
    "outputs/g2p/Allele.Vertex.json.gz",
    "outputs/mc3/Allele.Vertex.json.gz",
    "outputs/gdsc/caveman.Allele.Vertex.json.gz",
    "outputs/gdsc/pindel.Allele.Vertex.json.gz",
    # unnormalized Allele <-> SomaticCallset edges
    "outputs/ccle/maf.Allele_SomaticCallsets_SomaticCallset.Edge.json.gz",
    "outputs/ccle/maf.SomaticCallset_Alleles_Allele.Edge.json.gz",
    "outputs/mc3/Allele_SomaticCallsets_SomaticCallset.Edge.json.gz",
    "outputs/mc3/SomaticCallset_Alleles_Allele.Edge.json.gz",
    "outputs/gdsc/caveman.Allele_SomaticCallsets_SomaticCallset.Edge.json.gz",
    "outputs/gdsc/caveman.SomaticCallset_Alleles_Allele.Edge.json.gz",
    "outputs/gdsc/pindel.Allele_SomaticCallsets_SomaticCallset.Edge.json.gz",
    "outputs/gdsc/pindel.SomaticCallset_Alleles_Allele.Edge.json.gz",
    # Meta files
    "outputs/meta/Command.Vertex.json.gz",
    "outputs/meta/File.Vertex.json.gz",
    "outputs/meta/Command_Reads_File.json.gz",
    "outputs/meta/File_InputTo_Command.json.gz",
    "outputs/meta/Command_Writes_File.Edge.json.gz",
    "outputs/meta/File_CreatedBy_Command.json.gz",
    "outputs/meta/bmeg_file_manifest.txt",
    # Methylation
    "outputs/tcga/IlluminaHumanMethylation450.Methylation.Vertex.json.gz",
    "outputs/tcga/IlluminaHumanMethylation450.MethylationProbe.Vertex.json.gz",
    "outputs/tcga/IlluminaHumanMethylation450.Aliquot_Methylations_Methylation.Edge.json.gz",
    "outputs/tcga/IlluminaHumanMethylation450.Methylation_Aliquot_Aliquot.Edge.json.gz",
    "outputs/tcga/IlluminaHumanMethylation450.MethylationProbe_Gene_Gene.Edge.json.gz",
    "outputs/tcga/IlluminaHumanMethylation450.Gene_MethylationProbes_MethylationProbe.Edge.json.gz"
]
print("generating DVC command...")
DVC_CMD = "dvc run --file outputs.bmeg_manifest.dvc --yes --ignore-build-cache"
outputs = []
for f in files:
    with open(f, "r") as stream:
        dvc = yaml.safe_load(stream)
        if "outs" not in dvc:
            print(f, "has no outputs...")
            continue
        for d in dvc["outs"]:
            if d["path"] in EXCEPTIONS:
                print("excluding {}...".format(d["path"]))
                continue
            if os.path.isfile(d["path"]):
                outputs.append(d["path"])
            elif os.path.isdir(d["path"]):
                ofiles = glob.glob(os.path.join(d["path"], "**", "*.Vertex.json.gz"), recursive=True) + glob.glob(os.path.join(d["path"], "**", "*.Edge.json.gz"), recursive=True)
                for of in ofiles:
                    if of in EXCEPTIONS:
                        print("excluding {}...".format(of))
                        continue
                    outputs.append(of)

final_outputs = []
for o in sorted(set(outputs)):
    if not (o.endswith(".Edge.json.gz") or o.endswith(".Vertex.json.gz")):
        print("excluding {}...".format(o))
        continue
    DVC_CMD += " -d {}".format(o)
    final_outputs.append(o)

DVC_CMD += ' "echo generating file manifest..."'
args = shlex.split(DVC_CMD)
subprocess.call(args)

with open('bmeg_file_manifest.txt', 'w+') as fobj:
    fobj.write('\n'.join(final_outputs))
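The script builds one long shell string and only tokenizes it at the end; `shlex.split` applies POSIX shell quoting rules, so the quoted trailing `echo` survives as a single argument. A small sketch of that behavior (the file name here is illustrative, not taken from the real manifest):

```python
import shlex

# One dependency plus a quoted command, mirroring how DVC_CMD is assembled above.
cmd = 'dvc run --file outputs.bmeg_manifest.dvc -d a.json "echo generating file manifest..."'
args = shlex.split(cmd)

assert args[:2] == ["dvc", "run"]
# Quotes are stripped, but the quoted text stays one token.
assert args[-1] == "echo generating file manifest..."
```

This is why the command can be handed to `subprocess.call(args)` without `shell=True`: the argument list is already split the way a shell would have split it.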


# === File: class4/assign3.py | repo: jon-burgess-execulink/pynet-class | license: Apache-2.0 ===
import pexpect
"""
#pylint: disable=C0103
from protonfixes import util
def main():
""" Launcher currently broken
"""
util.replace_command("launcher/Paradox Launcher.exe", "Aftermath64.exe")
| 20.909091 | 76 | 0.708696 | 27 | 230 | 6 | 0.851852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0.165217 | 230 | 10 | 77 | 23 | 0.8125 | 0.386957 | 0 | 0 | 0 | 0 | 0.346457 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
434afd46ee5eab8f3db6d2027d3382f32310651f | 606 | py | Python | class4/assign3.py | jon-burgess-execulink/pynet-class | cacb3c147304982704e58ea1275adeea1591bcb2 | [
"Apache-2.0"
] | null | null | null | class4/assign3.py | jon-burgess-execulink/pynet-class | cacb3c147304982704e58ea1275adeea1591bcb2 | [
"Apache-2.0"
] | null | null | null | class4/assign3.py | jon-burgess-execulink/pynet-class | cacb3c147304982704e58ea1275adeea1591bcb2 | [
"Apache-2.0"
] | null | null | null | import pexpect
import sys
pynet_rtr2_ip_addr = "184.105.247.71"
pynet_rtr2_username = 'pyclass'
pynet_rtr2_password = '88newclass'
pynet_rtr2_connection = pexpect.spawn('ssh -l {} {}'.format(pynet_rtr2_username, pynet_rtr2_ip_addr))
#pynet_rtr2_connection.logfile = sys.stdout
pynet_rtr2_connection.timeout = 10
pynet_rtr2_connection.expect('ssword:')
pynet_rtr2_connection.sendline(pynet_rtr2_password)
pynet_rtr2_connection.expect("pynet-rtr2#")
pynet_rtr2_connection.sendline("show ip interface brief")
pynet_rtr2_connection.expect("pynet-rtr2#")
print(pynet_rtr2_connection.before)


# === File: markdown_it/extensions/myst_blocks/__init__.py | repo: iooxa/markdown-it-py | license: MIT ===
from .index import myst_block_plugin  # noqa: F401


# === File: flink-ai-flow/ai_flow/protobuf/metadata_service_pb2_grpc.py | repo: LJMichale/flink-ai-extended | license: Apache-2.0 (and others) ===
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
from . import message_pb2 as message__pb2
from . import metadata_service_pb2 as metadata__service__pb2

class MetadataServiceStub(object):
    """Missing associated documentation comment in .proto file."""

    def __init__(self, channel):
        """Constructor.

        Args:
            channel: A grpc.Channel.
        """
        self.getDatasetById = channel.unary_unary(
            '/ai_flow.MetadataService/getDatasetById',
            request_serializer=metadata__service__pb2.IdRequest.SerializeToString,
            response_deserializer=message__pb2.Response.FromString,
        )
        self.getDatasetByName = channel.unary_unary(
            '/ai_flow.MetadataService/getDatasetByName',
            request_serializer=metadata__service__pb2.NameRequest.SerializeToString,
            response_deserializer=message__pb2.Response.FromString,
        )
        self.listDatasets = channel.unary_unary(
            '/ai_flow.MetadataService/listDatasets',
            request_serializer=metadata__service__pb2.ListRequest.SerializeToString,
            response_deserializer=message__pb2.Response.FromString,
        )
        self.registerDataset = channel.unary_unary(
            '/ai_flow.MetadataService/registerDataset',
            request_serializer=metadata__service__pb2.RegisterDatasetRequest.SerializeToString,
            response_deserializer=message__pb2.Response.FromString,
        )
        self.registerDatasetWithCatalog = channel.unary_unary(
            '/ai_flow.MetadataService/registerDatasetWithCatalog',
            request_serializer=metadata__service__pb2.RegisterDatasetRequest.SerializeToString,
            response_deserializer=message__pb2.Response.FromString,
        )
        self.registerDatasets = channel.unary_unary(
            '/ai_flow.MetadataService/registerDatasets',
            request_serializer=metadata__service__pb2.RegisterDatasetsRequest.SerializeToString,
            response_deserializer=message__pb2.Response.FromString,
        )
        self.updateDataset = channel.unary_unary(
            '/ai_flow.MetadataService/updateDataset',
            request_serializer=metadata__service__pb2.UpdateDatasetRequest.SerializeToString,
            response_deserializer=message__pb2.Response.FromString,
        )
        self.deleteDatasetById = channel.unary_unary(
            '/ai_flow.MetadataService/deleteDatasetById',
            request_serializer=metadata__service__pb2.IdRequest.SerializeToString,
            response_deserializer=message__pb2.Response.FromString,
        )
        self.deleteDatasetByName = channel.unary_unary(
            '/ai_flow.MetadataService/deleteDatasetByName',
            request_serializer=metadata__service__pb2.NameRequest.SerializeToString,
            response_deserializer=message__pb2.Response.FromString,
        )
        self.getModelRelationById = channel.unary_unary(
            '/ai_flow.MetadataService/getModelRelationById',
            request_serializer=metadata__service__pb2.IdRequest.SerializeToString,
            response_deserializer=message__pb2.Response.FromString,
        )
        self.getModelRelationByName = channel.unary_unary(
            '/ai_flow.MetadataService/getModelRelationByName',
            request_serializer=metadata__service__pb2.NameRequest.SerializeToString,
            response_deserializer=message__pb2.Response.FromString,
        )
        self.listModelRelation = channel.unary_unary(
            '/ai_flow.MetadataService/listModelRelation',
            request_serializer=metadata__service__pb2.ListRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.registerModelRelation = channel.unary_unary(
'/ai_flow.MetadataService/registerModelRelation',
request_serializer=metadata__service__pb2.RegisterModelRelationRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.deleteModelRelationById = channel.unary_unary(
'/ai_flow.MetadataService/deleteModelRelationById',
request_serializer=metadata__service__pb2.IdRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.deleteModelRelationByName = channel.unary_unary(
'/ai_flow.MetadataService/deleteModelRelationByName',
request_serializer=metadata__service__pb2.NameRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.getModelById = channel.unary_unary(
'/ai_flow.MetadataService/getModelById',
request_serializer=metadata__service__pb2.IdRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.getModelByName = channel.unary_unary(
'/ai_flow.MetadataService/getModelByName',
request_serializer=metadata__service__pb2.NameRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.registerModel = channel.unary_unary(
'/ai_flow.MetadataService/registerModel',
request_serializer=metadata__service__pb2.RegisterModelRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.deleteModelById = channel.unary_unary(
'/ai_flow.MetadataService/deleteModelById',
request_serializer=metadata__service__pb2.IdRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.deleteModelByName = channel.unary_unary(
'/ai_flow.MetadataService/deleteModelByName',
request_serializer=metadata__service__pb2.NameRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.getModelVersionRelationByVersion = channel.unary_unary(
'/ai_flow.MetadataService/getModelVersionRelationByVersion',
request_serializer=metadata__service__pb2.ModelVersionNameRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.listModelVersionRelation = channel.unary_unary(
'/ai_flow.MetadataService/listModelVersionRelation',
request_serializer=metadata__service__pb2.ListModelVersionRelationRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.registerModelVersionRelation = channel.unary_unary(
'/ai_flow.MetadataService/registerModelVersionRelation',
request_serializer=metadata__service__pb2.RegisterModelVersionRelationRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.deleteModelVersionRelationByVersion = channel.unary_unary(
'/ai_flow.MetadataService/deleteModelVersionRelationByVersion',
request_serializer=metadata__service__pb2.ModelVersionNameRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.getModelVersionByVersion = channel.unary_unary(
'/ai_flow.MetadataService/getModelVersionByVersion',
request_serializer=metadata__service__pb2.ModelVersionNameRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.registerModelVersion = channel.unary_unary(
'/ai_flow.MetadataService/registerModelVersion',
request_serializer=metadata__service__pb2.RegisterModelVersionRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.deleteModelVersionByVersion = channel.unary_unary(
'/ai_flow.MetadataService/deleteModelVersionByVersion',
request_serializer=metadata__service__pb2.ModelVersionNameRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.getDeployedModelVersion = channel.unary_unary(
'/ai_flow.MetadataService/getDeployedModelVersion',
request_serializer=metadata__service__pb2.ModelNameRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.getLatestValidatedModelVersion = channel.unary_unary(
'/ai_flow.MetadataService/getLatestValidatedModelVersion',
request_serializer=metadata__service__pb2.ModelNameRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.getLatestGeneratedModelVersion = channel.unary_unary(
'/ai_flow.MetadataService/getLatestGeneratedModelVersion',
request_serializer=metadata__service__pb2.ModelNameRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.getProjectById = channel.unary_unary(
'/ai_flow.MetadataService/getProjectById',
request_serializer=metadata__service__pb2.IdRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.getProjectByName = channel.unary_unary(
'/ai_flow.MetadataService/getProjectByName',
request_serializer=metadata__service__pb2.NameRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.registerProject = channel.unary_unary(
'/ai_flow.MetadataService/registerProject',
request_serializer=metadata__service__pb2.RegisterProjectRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.updateProject = channel.unary_unary(
'/ai_flow.MetadataService/updateProject',
request_serializer=metadata__service__pb2.UpdateProjectRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.listProject = channel.unary_unary(
'/ai_flow.MetadataService/listProject',
request_serializer=metadata__service__pb2.ListRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.deleteProjectById = channel.unary_unary(
'/ai_flow.MetadataService/deleteProjectById',
request_serializer=metadata__service__pb2.IdRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.deleteProjectByName = channel.unary_unary(
'/ai_flow.MetadataService/deleteProjectByName',
request_serializer=metadata__service__pb2.NameRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.getArtifactById = channel.unary_unary(
'/ai_flow.MetadataService/getArtifactById',
request_serializer=metadata__service__pb2.IdRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.getArtifactByName = channel.unary_unary(
'/ai_flow.MetadataService/getArtifactByName',
request_serializer=metadata__service__pb2.NameRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.updateArtifact = channel.unary_unary(
'/ai_flow.MetadataService/updateArtifact',
request_serializer=metadata__service__pb2.UpdateArtifactRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.registerArtifact = channel.unary_unary(
'/ai_flow.MetadataService/registerArtifact',
request_serializer=metadata__service__pb2.RegisterArtifactRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.listArtifact = channel.unary_unary(
'/ai_flow.MetadataService/listArtifact',
request_serializer=metadata__service__pb2.ListRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.deleteArtifactById = channel.unary_unary(
'/ai_flow.MetadataService/deleteArtifactById',
request_serializer=metadata__service__pb2.IdRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.deleteArtifactByName = channel.unary_unary(
'/ai_flow.MetadataService/deleteArtifactByName',
request_serializer=metadata__service__pb2.NameRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.registerWorkflow = channel.unary_unary(
'/ai_flow.MetadataService/registerWorkflow',
request_serializer=metadata__service__pb2.RegisterWorkflowRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.updateWorkflow = channel.unary_unary(
'/ai_flow.MetadataService/updateWorkflow',
request_serializer=metadata__service__pb2.UpdateWorkflowRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.getWorkflowById = channel.unary_unary(
'/ai_flow.MetadataService/getWorkflowById',
request_serializer=metadata__service__pb2.IdRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.getWorkflowByName = channel.unary_unary(
'/ai_flow.MetadataService/getWorkflowByName',
request_serializer=metadata__service__pb2.WorkflowNameRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.deleteWorkflowById = channel.unary_unary(
'/ai_flow.MetadataService/deleteWorkflowById',
request_serializer=metadata__service__pb2.IdRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.deleteWorkflowByName = channel.unary_unary(
'/ai_flow.MetadataService/deleteWorkflowByName',
request_serializer=metadata__service__pb2.WorkflowNameRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
self.listWorkflows = channel.unary_unary(
'/ai_flow.MetadataService/listWorkflows',
request_serializer=metadata__service__pb2.ListWorkflowsRequest.SerializeToString,
response_deserializer=message__pb2.Response.FromString,
)
class MetadataServiceServicer(object):
"""Missing associated documentation comment in .proto file."""
def getDatasetById(self, request, context):
"""dataset api
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def getDatasetByName(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def listDatasets(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def registerDataset(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def registerDatasetWithCatalog(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def registerDatasets(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def updateDataset(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def deleteDatasetById(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def deleteDatasetByName(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def getModelRelationById(self, request, context):
"""model relation api
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def getModelRelationByName(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def listModelRelation(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def registerModelRelation(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def deleteModelRelationById(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def deleteModelRelationByName(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def getModelById(self, request, context):
"""model api
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def getModelByName(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def registerModel(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def deleteModelById(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def deleteModelByName(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def getModelVersionRelationByVersion(self, request, context):
"""model version relation api
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def listModelVersionRelation(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def registerModelVersionRelation(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def deleteModelVersionRelationByVersion(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def getModelVersionByVersion(self, request, context):
"""model version api
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def registerModelVersion(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def deleteModelVersionByVersion(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def getDeployedModelVersion(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def getLatestValidatedModelVersion(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def getLatestGeneratedModelVersion(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def getProjectById(self, request, context):
"""project api
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def getProjectByName(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def registerProject(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def updateProject(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def listProject(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def deleteProjectById(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def deleteProjectByName(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def getArtifactById(self, request, context):
"""artifact api
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def getArtifactByName(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def updateArtifact(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def registerArtifact(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def listArtifact(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def deleteArtifactById(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def deleteArtifactByName(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def registerWorkflow(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def updateWorkflow(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def getWorkflowById(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def getWorkflowByName(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def deleteWorkflowById(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def deleteWorkflowByName(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def listWorkflows(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
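# Server usage sketch: subclass MetadataServiceServicer, override only the
# methods you implement (unimplemented ones keep the UNIMPLEMENTED behavior
# above), then register the instance on a grpc.server. The port and worker
# count here are illustrative assumptions:
#
#     import grpc
#     from concurrent import futures
#
#     server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
#     add_MetadataServiceServicer_to_server(MyMetadataServicer(), server)
#     server.add_insecure_port('[::]:50051')
#     server.start()
#     server.wait_for_termination()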
def add_MetadataServiceServicer_to_server(servicer, server):
rpc_method_handlers = {
'getDatasetById': grpc.unary_unary_rpc_method_handler(
servicer.getDatasetById,
request_deserializer=metadata__service__pb2.IdRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'getDatasetByName': grpc.unary_unary_rpc_method_handler(
servicer.getDatasetByName,
request_deserializer=metadata__service__pb2.NameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'listDatasets': grpc.unary_unary_rpc_method_handler(
servicer.listDatasets,
request_deserializer=metadata__service__pb2.ListRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'registerDataset': grpc.unary_unary_rpc_method_handler(
servicer.registerDataset,
request_deserializer=metadata__service__pb2.RegisterDatasetRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'registerDatasetWithCatalog': grpc.unary_unary_rpc_method_handler(
servicer.registerDatasetWithCatalog,
request_deserializer=metadata__service__pb2.RegisterDatasetRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'registerDatasets': grpc.unary_unary_rpc_method_handler(
servicer.registerDatasets,
request_deserializer=metadata__service__pb2.RegisterDatasetsRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'updateDataset': grpc.unary_unary_rpc_method_handler(
servicer.updateDataset,
request_deserializer=metadata__service__pb2.UpdateDatasetRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'deleteDatasetById': grpc.unary_unary_rpc_method_handler(
servicer.deleteDatasetById,
request_deserializer=metadata__service__pb2.IdRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'deleteDatasetByName': grpc.unary_unary_rpc_method_handler(
servicer.deleteDatasetByName,
request_deserializer=metadata__service__pb2.NameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'getModelRelationById': grpc.unary_unary_rpc_method_handler(
servicer.getModelRelationById,
request_deserializer=metadata__service__pb2.IdRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'getModelRelationByName': grpc.unary_unary_rpc_method_handler(
servicer.getModelRelationByName,
request_deserializer=metadata__service__pb2.NameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'listModelRelation': grpc.unary_unary_rpc_method_handler(
servicer.listModelRelation,
request_deserializer=metadata__service__pb2.ListRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'registerModelRelation': grpc.unary_unary_rpc_method_handler(
servicer.registerModelRelation,
request_deserializer=metadata__service__pb2.RegisterModelRelationRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'deleteModelRelationById': grpc.unary_unary_rpc_method_handler(
servicer.deleteModelRelationById,
request_deserializer=metadata__service__pb2.IdRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'deleteModelRelationByName': grpc.unary_unary_rpc_method_handler(
servicer.deleteModelRelationByName,
request_deserializer=metadata__service__pb2.NameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'getModelById': grpc.unary_unary_rpc_method_handler(
servicer.getModelById,
request_deserializer=metadata__service__pb2.IdRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'getModelByName': grpc.unary_unary_rpc_method_handler(
servicer.getModelByName,
request_deserializer=metadata__service__pb2.NameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'registerModel': grpc.unary_unary_rpc_method_handler(
servicer.registerModel,
request_deserializer=metadata__service__pb2.RegisterModelRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'deleteModelById': grpc.unary_unary_rpc_method_handler(
servicer.deleteModelById,
request_deserializer=metadata__service__pb2.IdRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'deleteModelByName': grpc.unary_unary_rpc_method_handler(
servicer.deleteModelByName,
request_deserializer=metadata__service__pb2.NameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'getModelVersionRelationByVersion': grpc.unary_unary_rpc_method_handler(
servicer.getModelVersionRelationByVersion,
request_deserializer=metadata__service__pb2.ModelVersionNameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'listModelVersionRelation': grpc.unary_unary_rpc_method_handler(
servicer.listModelVersionRelation,
request_deserializer=metadata__service__pb2.ListModelVersionRelationRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'registerModelVersionRelation': grpc.unary_unary_rpc_method_handler(
servicer.registerModelVersionRelation,
request_deserializer=metadata__service__pb2.RegisterModelVersionRelationRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'deleteModelVersionRelationByVersion': grpc.unary_unary_rpc_method_handler(
servicer.deleteModelVersionRelationByVersion,
request_deserializer=metadata__service__pb2.ModelVersionNameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'getModelVersionByVersion': grpc.unary_unary_rpc_method_handler(
servicer.getModelVersionByVersion,
request_deserializer=metadata__service__pb2.ModelVersionNameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'registerModelVersion': grpc.unary_unary_rpc_method_handler(
servicer.registerModelVersion,
request_deserializer=metadata__service__pb2.RegisterModelVersionRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'deleteModelVersionByVersion': grpc.unary_unary_rpc_method_handler(
servicer.deleteModelVersionByVersion,
request_deserializer=metadata__service__pb2.ModelVersionNameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'getDeployedModelVersion': grpc.unary_unary_rpc_method_handler(
servicer.getDeployedModelVersion,
request_deserializer=metadata__service__pb2.ModelNameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'getLatestValidatedModelVersion': grpc.unary_unary_rpc_method_handler(
servicer.getLatestValidatedModelVersion,
request_deserializer=metadata__service__pb2.ModelNameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'getLatestGeneratedModelVersion': grpc.unary_unary_rpc_method_handler(
servicer.getLatestGeneratedModelVersion,
request_deserializer=metadata__service__pb2.ModelNameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'getProjectById': grpc.unary_unary_rpc_method_handler(
servicer.getProjectById,
request_deserializer=metadata__service__pb2.IdRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'getProjectByName': grpc.unary_unary_rpc_method_handler(
servicer.getProjectByName,
request_deserializer=metadata__service__pb2.NameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'registerProject': grpc.unary_unary_rpc_method_handler(
servicer.registerProject,
request_deserializer=metadata__service__pb2.RegisterProjectRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'updateProject': grpc.unary_unary_rpc_method_handler(
servicer.updateProject,
request_deserializer=metadata__service__pb2.UpdateProjectRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'listProject': grpc.unary_unary_rpc_method_handler(
servicer.listProject,
request_deserializer=metadata__service__pb2.ListRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'deleteProjectById': grpc.unary_unary_rpc_method_handler(
servicer.deleteProjectById,
request_deserializer=metadata__service__pb2.IdRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'deleteProjectByName': grpc.unary_unary_rpc_method_handler(
servicer.deleteProjectByName,
request_deserializer=metadata__service__pb2.NameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'getArtifactById': grpc.unary_unary_rpc_method_handler(
servicer.getArtifactById,
request_deserializer=metadata__service__pb2.IdRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'getArtifactByName': grpc.unary_unary_rpc_method_handler(
servicer.getArtifactByName,
request_deserializer=metadata__service__pb2.NameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'updateArtifact': grpc.unary_unary_rpc_method_handler(
servicer.updateArtifact,
request_deserializer=metadata__service__pb2.UpdateArtifactRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'registerArtifact': grpc.unary_unary_rpc_method_handler(
servicer.registerArtifact,
request_deserializer=metadata__service__pb2.RegisterArtifactRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'listArtifact': grpc.unary_unary_rpc_method_handler(
servicer.listArtifact,
request_deserializer=metadata__service__pb2.ListRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'deleteArtifactById': grpc.unary_unary_rpc_method_handler(
servicer.deleteArtifactById,
request_deserializer=metadata__service__pb2.IdRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'deleteArtifactByName': grpc.unary_unary_rpc_method_handler(
servicer.deleteArtifactByName,
request_deserializer=metadata__service__pb2.NameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'registerWorkflow': grpc.unary_unary_rpc_method_handler(
servicer.registerWorkflow,
request_deserializer=metadata__service__pb2.RegisterWorkflowRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'updateWorkflow': grpc.unary_unary_rpc_method_handler(
servicer.updateWorkflow,
request_deserializer=metadata__service__pb2.UpdateWorkflowRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'getWorkflowById': grpc.unary_unary_rpc_method_handler(
servicer.getWorkflowById,
request_deserializer=metadata__service__pb2.IdRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'getWorkflowByName': grpc.unary_unary_rpc_method_handler(
servicer.getWorkflowByName,
request_deserializer=metadata__service__pb2.WorkflowNameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'deleteWorkflowById': grpc.unary_unary_rpc_method_handler(
servicer.deleteWorkflowById,
request_deserializer=metadata__service__pb2.IdRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'deleteWorkflowByName': grpc.unary_unary_rpc_method_handler(
servicer.deleteWorkflowByName,
request_deserializer=metadata__service__pb2.WorkflowNameRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
'listWorkflows': grpc.unary_unary_rpc_method_handler(
servicer.listWorkflows,
request_deserializer=metadata__service__pb2.ListWorkflowsRequest.FromString,
response_serializer=message__pb2.Response.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'ai_flow.MetadataService', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
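The handler table above maps each RPC name to a deserializer, a servicer method, and a serializer before being attached to the server. The following stdlib-only sketch illustrates that dispatch-table pattern in isolation — it is not the grpc API itself, and the `getProjectById` handler body is an invented stand-in (JSON stands in for protobuf `FromString`/`SerializeToString`):

```python
import json

def make_handler(handler, deserializer, serializer):
    # Wraps one RPC the way grpc.unary_unary_rpc_method_handler does:
    # decode the raw request, run the handler, encode the response.
    def call(raw_request):
        request = deserializer(raw_request)
        response = handler(request)
        return serializer(response)
    return call

# Toy counterpart of the rpc_method_handlers dict built above.
rpc_method_handlers = {
    'getProjectById': make_handler(
        handler=lambda req: {'id': req['id'], 'name': 'demo-project'},
        deserializer=json.loads,   # stands in for IdRequest.FromString
        serializer=json.dumps,     # stands in for Response.SerializeToString
    ),
}

def dispatch(method, raw_request):
    # A generic handler routes an incoming method name to its entry.
    return rpc_method_handlers[method](raw_request)
```

In the real module, this routing is performed by `grpc.method_handlers_generic_handler` once the table is registered on the server.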
# This class is part of an EXPERIMENTAL API.
class MetadataService(object):
"""Missing associated documentation comment in .proto file."""
@staticmethod
def getDatasetById(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/getDatasetById',
metadata__service__pb2.IdRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def getDatasetByName(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/getDatasetByName',
metadata__service__pb2.NameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def listDatasets(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/listDatasets',
metadata__service__pb2.ListRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def registerDataset(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/registerDataset',
metadata__service__pb2.RegisterDatasetRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def registerDatasetWithCatalog(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/registerDatasetWithCatalog',
metadata__service__pb2.RegisterDatasetRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def registerDatasets(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/registerDatasets',
metadata__service__pb2.RegisterDatasetsRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def updateDataset(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/updateDataset',
metadata__service__pb2.UpdateDatasetRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def deleteDatasetById(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/deleteDatasetById',
metadata__service__pb2.IdRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def deleteDatasetByName(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/deleteDatasetByName',
metadata__service__pb2.NameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def getModelRelationById(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/getModelRelationById',
metadata__service__pb2.IdRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def getModelRelationByName(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/getModelRelationByName',
metadata__service__pb2.NameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def listModelRelation(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/listModelRelation',
metadata__service__pb2.ListRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def registerModelRelation(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/registerModelRelation',
metadata__service__pb2.RegisterModelRelationRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def deleteModelRelationById(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/deleteModelRelationById',
metadata__service__pb2.IdRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def deleteModelRelationByName(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/deleteModelRelationByName',
metadata__service__pb2.NameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def getModelById(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/getModelById',
metadata__service__pb2.IdRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def getModelByName(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/getModelByName',
metadata__service__pb2.NameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def registerModel(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/registerModel',
metadata__service__pb2.RegisterModelRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def deleteModelById(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/deleteModelById',
metadata__service__pb2.IdRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def deleteModelByName(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/deleteModelByName',
metadata__service__pb2.NameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def getModelVersionRelationByVersion(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/getModelVersionRelationByVersion',
metadata__service__pb2.ModelVersionNameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def listModelVersionRelation(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/listModelVersionRelation',
metadata__service__pb2.ListModelVersionRelationRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def registerModelVersionRelation(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/registerModelVersionRelation',
metadata__service__pb2.RegisterModelVersionRelationRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def deleteModelVersionRelationByVersion(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/deleteModelVersionRelationByVersion',
metadata__service__pb2.ModelVersionNameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def getModelVersionByVersion(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/getModelVersionByVersion',
metadata__service__pb2.ModelVersionNameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def registerModelVersion(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/registerModelVersion',
metadata__service__pb2.RegisterModelVersionRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def deleteModelVersionByVersion(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/deleteModelVersionByVersion',
metadata__service__pb2.ModelVersionNameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def getDeployedModelVersion(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/getDeployedModelVersion',
metadata__service__pb2.ModelNameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def getLatestValidatedModelVersion(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/getLatestValidatedModelVersion',
metadata__service__pb2.ModelNameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def getLatestGeneratedModelVersion(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/getLatestGeneratedModelVersion',
metadata__service__pb2.ModelNameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def getProjectById(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/getProjectById',
metadata__service__pb2.IdRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def getProjectByName(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/getProjectByName',
metadata__service__pb2.NameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def registerProject(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/registerProject',
metadata__service__pb2.RegisterProjectRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def updateProject(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/updateProject',
metadata__service__pb2.UpdateProjectRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def listProject(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/listProject',
metadata__service__pb2.ListRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def deleteProjectById(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/deleteProjectById',
metadata__service__pb2.IdRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def deleteProjectByName(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/deleteProjectByName',
metadata__service__pb2.NameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def getArtifactById(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/getArtifactById',
metadata__service__pb2.IdRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def getArtifactByName(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/getArtifactByName',
metadata__service__pb2.NameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def updateArtifact(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/updateArtifact',
metadata__service__pb2.UpdateArtifactRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def registerArtifact(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/registerArtifact',
metadata__service__pb2.RegisterArtifactRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def listArtifact(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/listArtifact',
metadata__service__pb2.ListRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def deleteArtifactById(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/deleteArtifactById',
metadata__service__pb2.IdRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def deleteArtifactByName(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/deleteArtifactByName',
metadata__service__pb2.NameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def registerWorkflow(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/registerWorkflow',
metadata__service__pb2.RegisterWorkflowRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def updateWorkflow(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/updateWorkflow',
metadata__service__pb2.UpdateWorkflowRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def getWorkflowById(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/getWorkflowById',
metadata__service__pb2.IdRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def getWorkflowByName(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/getWorkflowByName',
metadata__service__pb2.WorkflowNameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def deleteWorkflowById(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/deleteWorkflowById',
metadata__service__pb2.IdRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def deleteWorkflowByName(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/deleteWorkflowByName',
metadata__service__pb2.WorkflowNameRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def listWorkflows(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/ai_flow.MetadataService/listWorkflows',
metadata__service__pb2.ListWorkflowsRequest.SerializeToString,
message__pb2.Response.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
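Every static method above follows one pattern: serialize the request, invoke `grpc.experimental.unary_unary` against a fixed, fully qualified method path, and deserialize the `Response`. A stdlib-only sketch of that path convention (no grpcio required; the channel call itself is omitted):

```python
SERVICE_NAME = 'ai_flow.MetadataService'

def method_path(rpc_name):
    # Builds the method path hard-coded in each static method above,
    # e.g. '/ai_flow.MetadataService/getDatasetById'.
    return '/{}/{}'.format(SERVICE_NAME, rpc_name)
```

Calling one of these helpers against a live service additionally requires grpcio, a reachable target address, and a populated request message — none of which are shown here.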
# src/events/views.py (hijal/event-management-system, license: bzip2-1.0.6)
from django.contrib.auth import authenticate, login, get_user_model
from django.shortcuts import render, redirect
def home_page(request):
    return render(request, 'home_page.html', {})


def about_page(request):
    return render(request, 'about.html', {})


def gallery_page(request):
    return render(request, 'gallery.html', {})


def contact_page(request):
    return render(request, 'contact.html', {})
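Views like these are normally wired into a URLconf. The following is a runnable, Django-free sketch of that routing idea: a plain dict standing in for `urlpatterns` (in a real project this would use `django.urls.path`, and the route names are assumptions):

```python
# Toy URLconf: maps a URL path to a view callable, mimicking how
# Django resolves a request path to one of the views above.
def home_page(request):
    # Stand-in view: returns (template, context) instead of rendering.
    return ('home_page.html', {})

routes = {
    '': home_page,
    'about/': lambda request: ('about.html', {}),
}

def resolve(path, request=None):
    # Look up the view for a path and call it, as the URL dispatcher would.
    return routes[path](request)
```

The real dispatcher also handles named routes, converters, and 404s for unmatched paths, all omitted here.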
# blogs/prefect-docker/docker_with_local_storage/components/componentA.py (kvnkho/demos, MIT)
class ComponentA:
    def __init__(self, n=2) -> None:
        self.n = n
6002231630437111f5bda952cfc31f3d5539c06d | 22 | py | Python | Lib/test/test_import/data/circular_imports/util.py | sireliah/polish-python | 605df4944c2d3bc25f8bf6964b274c0a0d297cc3 | [
"PSF-2.0"
] | 1 | 2018-06-21T18:21:24.000Z | 2018-06-21T18:21:24.000Z | Lib/test/test_import/data/circular_imports/util.py | sireliah/polish-python | 605df4944c2d3bc25f8bf6964b274c0a0d297cc3 | [
"PSF-2.0"
] | null | null | null | Lib/test/test_import/data/circular_imports/util.py | sireliah/polish-python | 605df4944c2d3bc25f8bf6964b274c0a0d297cc3 | [
"PSF-2.0"
] | null | null | null | def util():
dalej
| 7.333333 | 11 | 0.545455 | 3 | 22 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.318182 | 22 | 2 | 12 | 11 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
6034b9f6c82b4921f3a76ddd8a1f489ef83e49f4 | 21 | py | Python | Hello_World/Hello-World-Python/DhvanilP.py | baroood/Hacktoberfest-2k17 | 87383df4bf705358866a5a4120dd678a3f2acd3e | [
"MIT"
] | 28 | 2017-10-04T19:42:26.000Z | 2021-03-26T04:00:48.000Z | Hello_World/Hello-World-Python/DhvanilP.py | baroood/Hacktoberfest-2k17 | 87383df4bf705358866a5a4120dd678a3f2acd3e | [
"MIT"
] | 375 | 2017-09-28T02:58:37.000Z | 2019-10-31T09:10:38.000Z | Hello_World/Hello-World-Python/DhvanilP.py | baroood/Hacktoberfest-2k17 | 87383df4bf705358866a5a4120dd678a3f2acd3e | [
"MIT"
] | 519 | 2017-09-28T02:40:29.000Z | 2021-02-15T08:29:17.000Z | print("Hello World") | 21 | 21 | 0.761905 | 3 | 21 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047619 | 21 | 1 | 21 | 21 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
6043b67507f22ff08b2b3f0f38a6a27d6149ad1b | 7,102 | py | Python | Linux.py | xoThatsmeVeahxo/RSEA | 892d09f57873a0d498489a6fb251f25d83e7b91f | [
"MIT"
] | 2 | 2022-02-09T22:58:11.000Z | 2022-02-13T22:47:56.000Z | Linux.py | xoThatsmeVeahxo/RSEA | 892d09f57873a0d498489a6fb251f25d83e7b91f | [
"MIT"
] | null | null | null | Linux.py | xoThatsmeVeahxo/RSEA | 892d09f57873a0d498489a6fb251f25d83e7b91f | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
_r=range;_p=print;_i=input;_c='\033c'
def K():
    from secrets import choice as ch;cc='çÇëËሐኀሃሓኃሁሑኁሂሒኂሄሔኄህሕኅሆሖኆለሉሊላሌልሎሏመሙሚማሜምሞሟሠሰሡሱሢሲሣሳሤሴሥስሦሶረሩሪራሬርሮሸሹሺሻሼሽሾቀቁቂቃቄቅቆቋበቡቢባቤብቦተቱቲታቴትቶቷቸቹቺቻቼችቾነኑኒናኔንኖኘኙኚኛኜኝኞአዐኡዑኢዒኣዓኤዔእዕኦዖከኩኪካኬክኮኸኹኺኻኼኽኾወዊዋዌውዉዎዘዙዚዛዜዝዞዠዡዢዣዤዥዦየዩዪያዬይዮደዱዲዳዴድዶጀጁጂጃጄጅጆገጉጊጋጌግጎጐጓጠጡጢጣጤጥጦጨጩጪጫጬጭጮጰጱጲጳጴጵጶጸፀጹፁጺፂጻፃጼፄጽፅጾፆፈፉፊፋፌፍፎፐፑፒፓፔፕፖαβγδεζηθικλμνξοπρσςτυφχψωΓΔΘΞΠΣΦΨΩ';from random import randint as ri;a=''.join(ch(cc)for _ in _r(ri(2,3)));b=''.join(ch(cc)for _ in _r(ri(2,3)));c=''.join(ch(cc)for _ in _r(ri(2,3)));d=''.join(ch(cc)for _ in _r(ri(2,3)));e=''.join(ch(cc)for _ in _r(ri(2,3)));f=''.join(ch(cc)for _ in _r(ri(2,3)));g=''.join(ch(cc)for _ in _r(ri(2,3)));h=''.join(ch(cc)for _ in _r(ri(2,3)));i=''.join(ch(cc)for _ in _r(ri(2,3)));j=''.join(ch(cc)for _ in _r(ri(2,3)));k=''.join(ch(cc)for _ in _r(ri(2,3)));l=''.join(ch(cc)for _ in _r(ri(2,3)));m=''.join(ch(cc)for _ in _r(ri(2,3)));n=''.join(ch(cc)for _ in _r(ri(2,3)));o=''.join(ch(cc)for _ in _r(ri(2,3)));p=''.join(ch(cc)for _ in _r(ri(2,3)));q=''.join(ch(cc)for _ in _r(ri(2,3)));r=''.join(ch(cc)for _ in _r(ri(2,3)));s=''.join(ch(cc)for _ in _r(ri(2,3)));t=''.join(ch(cc)for _ in _r(ri(2,3)));u=''.join(ch(cc)for _ in _r(ri(2,3)));v=''.join(ch(cc)for _ in _r(ri(2,3)));w=''.join(ch(cc)for _ in _r(ri(2,3)));x=''.join(ch(cc)for _ in _r(ri(2,3)));y=''.join(ch(cc)for _ in _r(ri(2,3)));z=''.join(ch(cc)for _ in _r(ri(2,3)));A=''.join(ch(cc)for _ in _r(ri(2,3)));B=''.join(ch(cc)for _ in _r(ri(2,3)));C=''.join(ch(cc)for _ in _r(ri(2,3)));D=''.join(ch(cc)for _ in _r(ri(2,3)));E=''.join(ch(cc)for _ in _r(ri(2,3)));F=''.join(ch(cc)for _ in _r(ri(2,3)));G=''.join(ch(cc)for _ in _r(ri(2,3)));H=''.join(ch(cc)for _ in _r(ri(2,3)));I=''.join(ch(cc)for _ in _r(ri(2,3)));J=''.join(ch(cc)for _ in _r(ri(2,3)));K=''.join(ch(cc)for _ in _r(ri(2,3)));L=''.join(ch(cc)for _ in _r(ri(2,3)));M=''.join(ch(cc)for _ in _r(ri(2,3)));N=''.join(ch(cc)for _ in _r(ri(2,3)));O=''.join(ch(cc)for _ in _r(ri(2,3)));P=''.join(ch(cc)for _ in _r(ri(2,3)));Q=''.join(ch(cc)for _ in _r(ri(2,3)));R=''.join(ch(cc)for _ in _r(ri(2,3)));S=''.join(ch(cc)for _ in _r(ri(2,3)));T=''.join(ch(cc)for _ in _r(ri(2,3)));U=''.join(ch(cc)for _ in _r(ri(2,3)));V=''.join(ch(cc)for _ in _r(ri(2,3)));W=''.join(ch(cc)for _ in _r(ri(2,3)));X=''.join(ch(cc)for _ in _r(ri(2,3)));Y=''.join(ch(cc)for _ in _r(ri(2,3)));Z=''.join(ch(cc)for _ in _r(ri(2,3)));_0=''.join(ch(cc)for _ in _r(ri(2,3)));_1=''.join(ch(cc)for _ in _r(ri(2,3)));_2=''.join(ch(cc)for _ in _r(ri(2,3)));_3=''.join(ch(cc)for _ in _r(ri(2,3)));_4=''.join(ch(cc)for _ in _r(ri(2,3)));_5=''.join(ch(cc)for _ in _r(ri(2,3)));_6=''.join(ch(cc)for _ in _r(ri(2,3)));_7=''.join(ch(cc)for _ in _r(ri(2,3)));_8=''.join(ch(cc)for _ in _r(ri(2,3)));_9=''.join(ch(cc)for _ in _r(ri(2,3)));C1=''.join(ch(cc)for _ in _r(ri(2,3)));C2=''.join(ch(cc)for _ in _r(ri(2,3)));C3=''.join(ch(cc)for _ in _r(ri(2,3)));C4=''.join(ch(cc)for _ in _r(ri(2,3)));C5=''.join(ch(cc)for _ in _r(ri(2,3)));C6=''.join(ch(cc)for _ in _r(ri(2,3)));C7=''.join(ch(cc)for _ in _r(ri(2,3)));C8=''.join(ch(cc)for _ in _r(ri(2,3)));C9=''.join(ch(cc)for _ in _r(ri(2,3)));C10=''.join(ch(cc)for _ in _r(ri(2,3)));C11=''.join(ch(cc)for _ in _r(ri(2,3)));S1=''.join(ch(cc)for _ in _r(ri(2,3)));S2=''.join(ch(cc)for _ in _r(ri(2,3)));S3=''.join(ch(cc)for _ in _r(ri(2,3)));S4=''.join(ch(cc)for _ in _r(ri(2,3)))
    with open('o.py','w')as _f:_f.write("a='"+a+"';b='"+b+"';c='"+c+"';d='"+d+"';e='"+e+"';f='"+f+"';g='"+g+"';h='"+h+"';i='"+i+"';j='"+j+"';k='"+k+"';l='"+l+"';m='"+m+"';n='"+n+"';o='"+o+"';p='"+p+"';q='"+q+"';r='"+r+"';s='"+s+"';t='"+t+"';u='"+u+"';v='"+v+"';w='"+w+"';x='"+x+"';y='"+y+"';z='"+z+"';A='"+A+"';B='"+B+"';C='"+C+"';D='"+D+"';E='"+E+"';F='"+F+"';G='"+G+"';H='"+H+"';I='"+I+"';J='"+J+"';K='"+K+"';L='"+L+"';M='"+M+"';N='"+N+"';O='"+O+"';P='"+P+"';Q='"+Q+"';R='"+R+"';S='"+S+"';T='"+T+"';U='"+U+"';V='"+V+"';W='"+W+"';X='"+X+"';Y='"+Y+"';Z='"+Z+"';_0='"+_0+"';_1='"+_1+"';_2='"+_2+"';_3='"+_3+"';_4='"+_4+"';_5='"+_5+"';_6='"+_6+"';_7='"+_7+"';_8='"+_8+"';_9='"+_9+"';C1='"+C1+"';C2='"+C2+"';C3='"+C3+"';C4='"+C4+"';C5='"+C5+"';C6='"+C6+"';C7='"+C7+"';C8='"+C8+"';C9='"+C9+"';C10='"+C10+"';C11='"+C11+"';S1='"+S1+"';S2='"+S2+"';S3='"+S3+"';S4='"+S4+"'")
def F():from o import a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z,A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,W,X,Y,Z,_0,_1,_2,_3,_4,_5,_6,_7,_8,_9,C1,C2,C3,C4,C5,C6,C7,C8,C9,C10,C11,S1,S2,S3,S4;_p(_i(_c+'')[::-1].replace('a',S1+a).replace('b',S2+b).replace('c',S3+c).replace('d',S4+d).replace('e',S2+e).replace('f',S3+f).replace('g',S1+g).replace('h',S2+h).replace('i',S3+i).replace('j',S1+j).replace('k',S2+k).replace('l',S3+l).replace('m',S1+m).replace('n',S2+n).replace('o',S3+o).replace('p',S1+p).replace('q',S2+q).replace('r',S3+r).replace('s',S1+s).replace('t',S2+t).replace('u',S3+u).replace('v',S1+v).replace('w',S2+w).replace('x',S3+x).replace('y',S1+y).replace('z',S2+z).replace('A',S3+A).replace('B',S1+B).replace('C',S2+C).replace('D',S3+D).replace('E',S1+E).replace('F',S3+F).replace('G',S1+G).replace('H',S2+H).replace('I',S3+I).replace('J',S1+J).replace('K',S2+K).replace('L',S3+L).replace('M',S1+M).replace('N',S2+N).replace('O',S3+O).replace('P',S1+P).replace('Q',S2+Q).replace('R',S3+R).replace('S',S1+S).replace('T',S2+T).replace('U',S3+U).replace('V',S1+V).replace('W',S2+W).replace('X',S3+X).replace('Y',S1+Y).replace('Z',S2+Z).replace('0',S3+_0).replace('1',S1+_1).replace('2',S2+_2).replace('3',S3+_3).replace('4',S1+_4).replace('5',S2+_5).replace('6',S3+_6).replace('7',S1+_7).replace('8',S2+_8).replace('9',S3+_9).replace(' ',S1+C1).replace('.',S2+C2).replace('?',S3+C3).replace('!',S1+C4).replace(',',S2+C5).replace("'",S3+C6).replace('-',S1+C7).replace(';',S2+C8).replace(':',S3+C9).replace('_',S1+C10).replace('/',S2+C11));_p(_i('\n').replace(S1,'').replace(S2,'').replace(S3,'').replace(S4,'').replace(a,'a').replace(b,'b').replace(c,'c').replace(d,'d').replace(e,'e').replace(f,'f').replace(g,'g').replace(h,'h').replace(i,'i').replace(j,'j').replace(k,'k').replace(l,'l').replace(m,'m').replace(n,'n').replace(o,'o').replace(p,'p').replace(q,'q').replace(r,'r').replace(s,'s').replace(t,'t').replace(u,'u').replace(v,'v').replace(w,'w').replace(x,'x').replace(y,'y').replace(z,'z').replace(A,'A').replace(B,'B').replace(C,'C').replace(D,'D').replace(E,'E').replace(F,'F').replace(G,'G').replace(H,'H').replace(I,'I').replace(J,'J').replace(K,'K').replace(L,'L').replace(M,'M').replace(N,'N').replace(O,'O').replace(P,'P').replace(Q,'Q').replace(R,'R').replace(S,'S').replace(T,'T').replace(U,'U').replace(V,'V').replace(W,'W').replace(X,'X').replace(Y,'Y').replace(Z,'Z').replace(_0,'0').replace(_1,'1').replace(_2,'2').replace(_3,'3').replace(_4,'4').replace(_5,'5').replace(_6,'6').replace(_7,'7').replace(_8,'8').replace(_9,'9').replace(C1,' ').replace(C2,'.').replace(C3,'?').replace(C4,'!').replace(C5,',').replace(C6,"'").replace(C7,'-').replace(C8,';').replace(C9,':').replace(C10,'_').replace(C11,'/')[::-1]),_i()
def I():
i=_i(_c+'1 Create Key\n2 Encrypt/Decrypt\n\n')
if i=='1':K()
if i=='2':F()
if i!='12':I()
I() | 591.833333 | 3,299 | 0.557871 | 1,575 | 7,102 | 2.369524 | 0.054603 | 0.042337 | 0.165059 | 0.226956 | 0.682476 | 0.682476 | 0.682476 | 0.682476 | 0.682476 | 0.561897 | 0 | 0.061081 | 0.038721 | 7,102 | 12 | 3,300 | 591.833333 | 0.485572 | 0.002957 | 0 | 0 | 0 | 0 | 0.124559 | 0.038695 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0.181818 | 0 | 0.454545 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
605d6e45455f076fdcaacaa11eadf3517d07b700 | 263 | py | Python | python_modules/libraries/dagster-ssh/dagster_ssh/__init__.py | JPeer264/dagster-fork | 32cc87a36134be7c442fa85d6867eb1d3301aea0 | [
"Apache-2.0"
] | 1 | 2020-09-19T16:35:59.000Z | 2020-09-19T16:35:59.000Z | python_modules/libraries/dagster-ssh/dagster_ssh/__init__.py | JPeer264/dagster-fork | 32cc87a36134be7c442fa85d6867eb1d3301aea0 | [
"Apache-2.0"
] | null | null | null | python_modules/libraries/dagster-ssh/dagster_ssh/__init__.py | JPeer264/dagster-fork | 32cc87a36134be7c442fa85d6867eb1d3301aea0 | [
"Apache-2.0"
] | null | null | null | from dagster.core.utils import check_dagster_package_version
from .resources import ssh_resource
from .solids import sftp_solid
from .version import __version__
check_dagster_package_version('dagster-ssh', __version__)
__all__ = ['ssh_resource', 'sftp_solid']
| 26.3 | 60 | 0.8327 | 35 | 263 | 5.628571 | 0.428571 | 0.121827 | 0.192893 | 0.263959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095057 | 263 | 9 | 61 | 29.222222 | 0.827731 | 0 | 0 | 0 | 0 | 0 | 0.125475 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
6060b56641e8ac63434a745d011c4534d1d8693a | 204 | py | Python | app/home/home.py | jasoncordis/Spotify-Playlist-Generator1 | 149941ec16167a32c076ee98a8325555411d614d | [
"MIT"
] | 75 | 2020-10-04T11:04:24.000Z | 2022-03-30T15:54:36.000Z | app/home/home.py | jasoncordis/Spotify-Playlist-Generator1 | 149941ec16167a32c076ee98a8325555411d614d | [
"MIT"
] | 3 | 2020-10-04T20:25:59.000Z | 2021-07-12T10:11:57.000Z | app/home/home.py | jasoncordis/Spotify-Playlist-Generator1 | 149941ec16167a32c076ee98a8325555411d614d | [
"MIT"
] | 14 | 2020-10-04T11:17:42.000Z | 2022-03-18T07:28:39.000Z | from flask import render_template, Blueprint
home_blueprint = Blueprint('home_bp', __name__, template_folder='templates')
@home_blueprint.route("/")
def home():
return render_template('home.html')
| 22.666667 | 76 | 0.764706 | 25 | 204 | 5.84 | 0.6 | 0.191781 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107843 | 204 | 8 | 77 | 25.5 | 0.802198 | 0 | 0 | 0 | 0 | 0 | 0.127451 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0.6 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 5 |
6074a079ae39206d2e1f3e3a0fd51ab2d7f7f845 | 24,682 | py | Python | src/the_tale/the_tale/game/bills/tests/test_prototype.py | Alacrate/the-tale | 43b211f3a99e93964e95abc20a8ed649a205ffcf | [
"BSD-3-Clause"
] | 85 | 2017-11-21T12:22:02.000Z | 2022-03-27T23:07:17.000Z | src/the_tale/the_tale/game/bills/tests/test_prototype.py | Alacrate/the-tale | 43b211f3a99e93964e95abc20a8ed649a205ffcf | [
"BSD-3-Clause"
] | 545 | 2017-11-04T14:15:04.000Z | 2022-03-27T14:19:27.000Z | src/the_tale/the_tale/game/bills/tests/test_prototype.py | Alacrate/the-tale | 43b211f3a99e93964e95abc20a8ed649a205ffcf | [
"BSD-3-Clause"
] | 45 | 2017-11-11T12:36:30.000Z | 2022-02-25T06:10:44.000Z |
import smart_imports
smart_imports.all()
class BillPrototypeTests(helpers.BaseTestPrototypes):
def setUp(self):
super(BillPrototypeTests, self).setUp()
self.hero = heroes_logic.load_hero(account_id=self.account2.id)
game_tt_services.debug_clear_service()
def create_bill(self, account=None, depends_on_id=None):
if account is None:
account = self.account1
bill_data = bills.place_renaming.PlaceRenaming(place_id=self.place1.id, name_forms=game_names.generator().get_test_name('new_name_1'))
return prototypes.BillPrototype.create(account,
'bill-1-caption',
bill_data,
chronicle_on_accepted='chronicle-on-accepted',
depends_on_id=depends_on_id)
def test_accepted_bills_count(self):
for state in relations.BILL_STATE.records:
bill = self.create_bill(self.account1)
bill.state = state
bill.save()
for state in relations.BILL_STATE.records:
bill = self.create_bill(self.account2)
bill.state = state
bill.save()
self.assertEqual(prototypes.BillPrototype.accepted_bills_count(self.account1.id), 1)
self.assertEqual(prototypes.BillPrototype.accepted_bills_count(self.account2.id), 1)
self.assertEqual(prototypes.BillPrototype.accepted_bills_count(self.account3.id), 0)
def test_is_active_bills_limit_reached(self):
for i in range(c.ACCOUNT_MAX_ACTIVE_BILLS):
self.assertFalse(prototypes.BillPrototype.is_active_bills_limit_reached(self.account1))
self.create_bill()
self.assertTrue(prototypes.BillPrototype.is_active_bills_limit_reached(self.account1))
@mock.patch('the_tale.game.places.objects.Place.is_new', False)
def test_can_vote__places_restrictions__no_places(self):
bill = self.create_bill()
with mock.patch('the_tale.game.bills.bills.place_renaming.PlaceRenaming.actors', []):
self.assertTrue(bill.can_vote(self.hero))
@mock.patch('the_tale.game.places.objects.Place.is_new', False)
def test_can_vote__places_restrictions__no_allowed_places(self):
bill = self.create_bill()
with mock.patch('the_tale.game.bills.bills.place_renaming.PlaceRenaming.actors', [self.place1, self.place2, self.place3]):
self.assertFalse(bill.can_vote(self.hero))
@mock.patch('the_tale.game.places.objects.Place.is_new', True)
def test_can_vote__places_restrictions__no_allowed_places__with_timeout(self):
bill = self.create_bill()
with mock.patch('the_tale.game.bills.bills.place_renaming.PlaceRenaming.actors', [self.place1, self.place2, self.place3]):
self.assertTrue(bill.can_vote(self.hero))
@mock.patch('the_tale.game.places.objects.Place.is_new', False)
def test_can_vote__places_restrictions__allowed_place(self):
bill = self.create_bill()
places_logic.add_fame(self.hero.id, fames=[(self.place2.id, c.BILLS_FAME_BORDER)])
with mock.patch('the_tale.game.bills.bills.place_renaming.PlaceRenaming.actors', [self.place1, self.place2, self.place3]):
self.assertTrue(bill.can_vote(self.hero))
@mock.patch('the_tale.game.places.objects.Place.is_new', False)
def test_can_vote__places_restrictions__fame_border(self):
bill = self.create_bill()
places_logic.add_fame(self.hero.id, fames=[(self.place2.id, c.BILLS_FAME_BORDER-1)])
with mock.patch('the_tale.game.bills.bills.place_renaming.PlaceRenaming.actors', [self.place1, self.place2, self.place3]):
self.assertFalse(bill.can_vote(self.hero))
def test_remove_duplicate_actors(self):
bill = self.create_bill()
with mock.patch('the_tale.game.bills.bills.place_renaming.PlaceRenaming.actors', [self.place1, self.place1, self.place3]):
self.assertEqual(bill.actors, [self.place1, self.place3])
def test_is_delayed__no_dependencies(self):
bill = self.create_bill()
self.assertFalse(bill.is_delayed)
def test_is_delayed__has_dependencies(self):
base_bill = self.create_bill()
child_bill = self.create_bill(depends_on_id=base_bill.id)
self.assertTrue(child_bill.is_delayed)
def test_has_meaning_with_dependency_state(self):
base_bill = self.create_bill()
child_bill = self.create_bill(depends_on_id=base_bill.id)
for state in relations.BILL_STATE.records:
base_bill.state = state
base_bill.save()
child_bill.reload()
self.assertEqual(not state.break_dependent_bills, child_bill.has_meaning())
class TestPrototypeApply(helpers.BaseTestPrototypes):
def setUp(self):
super(TestPrototypeApply, self).setUp()
bill_data = bills.place_renaming.PlaceRenaming(place_id=self.place1.id, name_forms=game_names.generator().get_test_name('new_name_1'))
self.bill = prototypes.BillPrototype.create(self.account1, 'bill-1-caption', bill_data, chronicle_on_accepted='chronicle-on-accepted')
self.bill.approved_by_moderator = True
self.bill.save()
def check_place(self, place_id, name, name_forms):
self.assertEqual(places_storage.places[place_id].name, name)
self.assertEqual(places_storage.places[place_id].utg_name.forms, name_forms)
@mock.patch('the_tale.game.bills.prototypes.BillPrototype.time_before_voting_end', lambda x: datetime.timedelta(seconds=0))
def test_wrong_state(self):
self.bill.state = relations.BILL_STATE.ACCEPTED
self.bill.save()
self.assertRaises(exceptions.ApplyBillInWrongStateError, self.bill.apply)
places_storage.places.sync(force=True)
self.check_place(self.place1.id, self.place1.name, self.place1.utg_name.forms)
@mock.patch('the_tale.game.bills.prototypes.BillPrototype.time_before_voting_end', lambda x: datetime.timedelta(seconds=0))
def test_not_approved(self):
self.bill.approved_by_moderator = False
self.bill.save()
self.assertRaises(exceptions.ApplyUnapprovedBillError, self.bill.apply)
places_storage.places.sync(force=True)
self.assertEqual(self.bill.applyed_at_turn, None)
self.check_place(self.place1.id, self.place1.name, self.place1.utg_name.forms)
def test_wrong_time(self):
self.assertRaises(exceptions.ApplyBillBeforeVoteWasEndedError, self.bill.apply)
places_storage.places.sync(force=True)
self.check_place(self.place1.id, self.place1.name, self.place1.utg_name.forms)
@mock.patch('the_tale.game.bills.conf.settings.MIN_VOTES_PERCENT', 0.51)
@mock.patch('the_tale.game.bills.prototypes.BillPrototype.time_before_voting_end', datetime.timedelta(seconds=0))
def test_not_enough_voices_percents(self):
chronicle_tt_services.chronicle.cmd_debug_clear_service()
game_turn.increment()
game_turn.increment()
prototypes.VotePrototype.create(self.account2, self.bill, relations.VOTE_TYPE.AGAINST)
prototypes.VotePrototype.create(self.account3, self.bill, relations.VOTE_TYPE.REFRAINED)
self.assertEqual(forum_models.Post.objects.all().count(), 1)
with self.check_not_changed(lambda: self.place1.attrs.stability):
self.assertFalse(self.bill.apply())
self.assertTrue(self.bill.state.is_REJECTED)
self.assertEqual(forum_models.Post.objects.all().count(), 2)
bill = prototypes.BillPrototype.get_by_id(self.bill.id)
self.assertTrue(bill.state.is_REJECTED)
places_storage.places.sync(force=True)
self.place1.refresh_attributes()
self.assertEqual(bill.applyed_at_turn, game_turn.number())
self.check_place(self.place1.id, self.place1.name, self.place1.utg_name.forms)
page, total_records, events = chronicle_tt_services.chronicle.cmd_get_events(tags=(), page=1, records_on_page=100)
self.assertEqual(total_records, 0)
def update_and_approve(self):
##################################
# set name forms
data = self.bill.user_form_initials
data.update(linguistics_helpers.get_word_post_data(self.bill.data.name_forms, prefix='name'))
data['approved'] = True
form = self.bill.data.get_moderator_form_update(data)
self.assertTrue(form.is_valid())
self.bill.update_by_moderator(form, self.account1)
##################################
def prepair_data_to_approve(self):
prototypes.VotePrototype.create(self.account2, self.bill, relations.VOTE_TYPE.AGAINST)
prototypes.VotePrototype.create(self.account3, self.bill, relations.VOTE_TYPE.FOR)
prototypes.VotePrototype.create(self.account4, self.bill, relations.VOTE_TYPE.REFRAINED)
self.update_and_approve()
def test_update_by_moderator(self):
with self.check_increased(models.Moderation.objects.filter(bill_id=self.bill.id, moderator_id=self.account1.id).count):
self.update_and_approve()
with self.check_increased(models.Moderation.objects.filter(bill_id=self.bill.id, moderator_id=self.account1.id).count):
self.update_and_approve()
def test_remove_by_moderator(self):
self.assertNotEqual(self.bill.owner_id, self.account2.id)
with self.check_increased(models.Moderation.objects.filter(bill_id=self.bill.id, moderator_id=self.account2.id).count):
self.bill.remove(self.account2)
def test_remove_by_owner(self):
self.assertEqual(self.bill.owner_id, self.account1.id)
with self.check_not_changed(models.Moderation.objects.all().count):
self.bill.remove(self.account1)
@mock.patch('the_tale.game.bills.conf.settings.MIN_VOTES_PERCENT', 0.6)
@mock.patch('the_tale.game.bills.prototypes.BillPrototype.time_before_voting_end', datetime.timedelta(seconds=0))
def test_approved(self):
game_turn.increment()
game_turn.increment()
game_turn.increment()
self.prepair_data_to_approve()
with self.check_delta(forum_models.Post.objects.all().count, 1):
self.assertTrue(self.bill.apply())
self.assertTrue(self.bill.state.is_ACCEPTED)
bill = prototypes.BillPrototype.get_by_id(self.bill.id)
self.assertTrue(bill.state.is_ACCEPTED)
places_storage.places.sync(force=True)
self.place1.refresh_attributes()
self.assertTrue(self.place1.attrs.stability < 1.0)
self.assertEqual(bill.applyed_at_turn, game_turn.number())
self.check_place(self.place1.id, 'new_name_1-нс,ед,им', self.bill.data.name_forms.forms)
@mock.patch('the_tale.game.bills.conf.settings.MIN_VOTES_PERCENT', 0.6)
@mock.patch('the_tale.game.bills.prototypes.BillPrototype.time_before_voting_end', datetime.timedelta(seconds=0))
def test_achievements(self):
self.prepair_data_to_approve()
with mock.patch('the_tale.accounts.achievements.storage.AchievementsStorage.verify_achievements') as verify_achievements:
self.assertTrue(self.bill.apply())
self.assertEqual(verify_achievements.call_args_list,
[mock.call(account_id=self.account1.id,
type=achievements_relations.ACHIEVEMENT_TYPE.POLITICS_ACCEPTED_BILLS,
old_value=0,
new_value=1)])
@mock.patch('the_tale.game.bills.conf.settings.MIN_VOTES_PERCENT', 0.6)
@mock.patch('the_tale.game.bills.prototypes.BillPrototype.time_before_voting_end', datetime.timedelta(seconds=0))
def test_chronicle(self):
chronicle_tt_services.chronicle.cmd_debug_clear_service()
self.prepair_data_to_approve()
self.assertTrue(self.bill.apply())
page, total_records, events = chronicle_tt_services.chronicle.cmd_get_events(tags=(), page=1, records_on_page=100)
self.assertEqual(total_records, 1)
self.assertEqual(events[0].message, self.bill.chronicle_on_accepted)
class TestPrototypeStop(helpers.BaseTestPrototypes):
def setUp(self):
super(TestPrototypeStop, self).setUp()
bill_data = bills.place_renaming.PlaceRenaming(place_id=self.place1.id, name_forms=game_names.generator().get_test_name('new_name_1'))
self.bill = prototypes.BillPrototype.create(self.account1, 'bill-1-caption', bill_data, chronicle_on_accepted='chronicle-on-accepted')
self.bill.approved_by_moderator = True
self.bill.save()
def test_wrong_state(self):
self.bill.state = relations.BILL_STATE.ACCEPTED
self.bill.save()
self.assertRaises(exceptions.StopBillInWrongStateError, self.bill.stop)
def test_stopped(self):
with self.check_delta(forum_models.Post.objects.all().count, 1):
self.bill.stop()
self.assertTrue(self.bill.state.is_STOPPED)
class TestPrototypeEnd(helpers.BaseTestPrototypes):
def setUp(self):
super(TestPrototypeEnd, self).setUp()
bill_data = bills.place_renaming.PlaceRenaming(place_id=self.place1.id, name_forms=game_names.generator().get_test_name('new_name_1'))
self.bill = prototypes.BillPrototype.create(self.account1, 'bill-1-caption', bill_data, chronicle_on_accepted='chronicle-on-accepted')
self.bill.state = relations.BILL_STATE.ACCEPTED
game_turn.increment()
def test_not_accepted(self):
for state in relations.BILL_STATE.records:
if state.is_ACCEPTED:
continue
self.bill.state = state
with mock.patch('the_tale.game.bills.bills.base_bill.BaseBill.end') as end:
self.assertRaises(exceptions.EndBillInWrongStateError, self.bill.end)
self.assertEqual(end.call_count, 0)
def test_already_ended(self):
self.bill._model.ended_at = datetime.datetime.now()
with mock.patch('the_tale.game.bills.bills.base_bill.BaseBill.end') as end:
self.assertRaises(exceptions.EndBillAlreadyEndedError, self.bill.end)
self.assertEqual(end.call_count, 0)
def test_success(self):
with mock.patch('the_tale.game.bills.bills.base_bill.BaseBill.end') as end:
self.bill.end()
self.assertEqual(end.call_count, 1)
class GetApplicableBillsTest(helpers.BaseTestPrototypes):
def setUp(self):
super(GetApplicableBillsTest, self).setUp()
self.bill_data = bills.place_description.PlaceDescripton(place_id=self.place1.id, description='description')
self.bill_1 = prototypes.BillPrototype.create(self.account1, 'bill-1-caption', self.bill_data, chronicle_on_accepted='chronicle-on-accepted')
self.bill_2 = prototypes.BillPrototype.create(self.account1, 'bill-1-caption', self.bill_data, chronicle_on_accepted='chronicle-on-accepted')
self.bill_3 = prototypes.BillPrototype.create(self.account1, 'bill-1-caption', self.bill_data, chronicle_on_accepted='chronicle-on-accepted')
prototypes.BillPrototype._model_class.objects.all().update(updated_at=datetime.datetime.now() - datetime.timedelta(seconds=conf.settings.BILL_LIVE_TIME),
approved_by_moderator=True)
self.bill_1.reload()
self.bill_2.reload()
self.bill_3.reload()
def test_all(self):
self.assertEqual(set(prototypes.BillPrototype.get_applicable_bills_ids()),
set((self.bill_1.id, self.bill_2.id, self.bill_3.id)))
def test_wrong_state(self):
for state in relations.BILL_STATE.records:
if state.is_VOTING:
continue
self.bill_1.state = state
self.bill_1.save()
self.assertEqual(set(prototypes.BillPrototype.get_applicable_bills_ids()), set((self.bill_2.id, self.bill_3.id)))
def test_approved_by_moderator(self):
self.bill_2.approved_by_moderator = False
self.bill_2.save()
self.assertEqual(set(prototypes.BillPrototype.get_applicable_bills_ids()), set((self.bill_1.id, self.bill_3.id)))
def test_voting_not_ended(self):
self.bill_3._model.updated_at = datetime.datetime.now()
self.bill_3.save()
self.assertEqual(set(prototypes.BillPrototype.get_applicable_bills_ids()), set((self.bill_1.id, self.bill_2.id)))
class TestActorPrototype(helpers.BaseTestPrototypes):
def setUp(self):
super(TestActorPrototype, self).setUp()
self.bill_data = bills.place_renaming.PlaceRenaming(place_id=self.place1.id, name_forms=game_names.generator().get_test_name('new_name_1'))
self.bill = prototypes.BillPrototype.create(self.account1, 'bill-1-caption', self.bill_data, chronicle_on_accepted='chronicle-on-accepted')
def test_actors_created(self):
self.assertTrue(models.Actor.objects.all().exists())
def test_actors_after_user_update(self):
old_actors_timestamps = list(models.Actor.objects.all().values_list('created_at', flat=True))
noun = game_names.generator().get_test_name('new-new-name')
data = linguistics_helpers.get_word_post_data(noun, prefix='name')
data.update({'caption': 'new-caption',
'chronicle_on_accepted': 'chronicle-on-accepted-2',
'place': self.place2.id})
form = bills.place_renaming.PlaceRenaming.UserForm(data)
self.assertTrue(form.is_valid())
self.bill.update(form)
new_actors_timestamps = list(models.Actor.objects.all().values_list('created_at', flat=True))
self.assertFalse(set(old_actors_timestamps) & set(new_actors_timestamps))
self.assertTrue(new_actors_timestamps)
class TestVotePrototype(helpers.BaseTestPrototypes):
def setUp(self):
super(TestVotePrototype, self).setUp()
bill_data = bills.place_renaming.PlaceRenaming(place_id=self.place1.id, name_forms=game_names.generator().get_test_name('new_name_1'))
self.bill = prototypes.BillPrototype.create(self.account1, 'bill-1-caption', bill_data, chronicle_on_accepted='chronicle-on-accepted')
self.bill.approved_by_moderator = True
self.bill.save()
def test_votes_count(self):
prototypes.VotePrototype.create(self.account2, self.bill, relations.VOTE_TYPE.AGAINST)
prototypes.VotePrototype.create(self.account3, self.bill, relations.VOTE_TYPE.REFRAINED)
self.assertEqual(prototypes.VotePrototype.votes_count(self.account1.id), 1)
self.assertEqual(prototypes.VotePrototype.votes_count(self.account2.id), 1)
self.assertEqual(prototypes.VotePrototype.votes_count(self.account3.id), 1)
        self.assertEqual(prototypes.VotePrototype.votes_count(self.account4.id), 0)

    def test_votes_for_count(self):
        prototypes.VotePrototype.create(self.account2, self.bill, relations.VOTE_TYPE.AGAINST)
        prototypes.VotePrototype.create(self.account3, self.bill, relations.VOTE_TYPE.REFRAINED)

        self.assertEqual(prototypes.VotePrototype.votes_for_count(self.account1.id), 1)
        self.assertEqual(prototypes.VotePrototype.votes_for_count(self.account2.id), 0)
        self.assertEqual(prototypes.VotePrototype.votes_for_count(self.account3.id), 0)
        self.assertEqual(prototypes.VotePrototype.votes_for_count(self.account4.id), 0)

    def test_votes_against_count(self):
        prototypes.VotePrototype.create(self.account2, self.bill, relations.VOTE_TYPE.AGAINST)
        prototypes.VotePrototype.create(self.account3, self.bill, relations.VOTE_TYPE.REFRAINED)

        self.assertEqual(prototypes.VotePrototype.votes_against_count(self.account1.id), 0)
        self.assertEqual(prototypes.VotePrototype.votes_against_count(self.account2.id), 1)
        self.assertEqual(prototypes.VotePrototype.votes_against_count(self.account3.id), 0)
        self.assertEqual(prototypes.VotePrototype.votes_against_count(self.account4.id), 0)

    def test_vote_for_achievements(self):
        with mock.patch('the_tale.accounts.achievements.storage.AchievementsStorage.verify_achievements') as verify_achievements:
            prototypes.VotePrototype.create(self.account2, self.bill, relations.VOTE_TYPE.FOR)

            self.assertEqual(verify_achievements.call_args_list,
                             [mock.call(account_id=self.account2.id,
                                        type=achievements_relations.ACHIEVEMENT_TYPE.POLITICS_VOTES_TOTAL,
                                        old_value=0,
                                        new_value=1),
                              mock.call(account_id=self.account2.id,
                                        type=achievements_relations.ACHIEVEMENT_TYPE.POLITICS_VOTES_FOR,
                                        old_value=0,
                                        new_value=1),
                              mock.call(account_id=self.account2.id,
                                        type=achievements_relations.ACHIEVEMENT_TYPE.POLITICS_VOTES_AGAINST,
                                        old_value=0,
                                        new_value=0)])

    def test_vote_against_achievements(self):
        with mock.patch('the_tale.accounts.achievements.storage.AchievementsStorage.verify_achievements') as verify_achievements:
            prototypes.VotePrototype.create(self.account2, self.bill, relations.VOTE_TYPE.AGAINST)

            self.assertEqual(verify_achievements.call_args_list,
                             [mock.call(account_id=self.account2.id,
                                        type=achievements_relations.ACHIEVEMENT_TYPE.POLITICS_VOTES_TOTAL,
                                        old_value=0,
                                        new_value=1),
                              mock.call(account_id=self.account2.id,
                                        type=achievements_relations.ACHIEVEMENT_TYPE.POLITICS_VOTES_FOR,
                                        old_value=0,
                                        new_value=0),
                              mock.call(account_id=self.account2.id,
                                        type=achievements_relations.ACHIEVEMENT_TYPE.POLITICS_VOTES_AGAINST,
                                        old_value=0,
                                        new_value=1)])

    def test_vote_refrained_achievements(self):
        with mock.patch('the_tale.accounts.achievements.storage.AchievementsStorage.verify_achievements') as verify_achievements:
            prototypes.VotePrototype.create(self.account2, self.bill, relations.VOTE_TYPE.REFRAINED)

            self.assertEqual(verify_achievements.call_args_list,
                             [mock.call(account_id=self.account2.id,
                                        type=achievements_relations.ACHIEVEMENT_TYPE.POLITICS_VOTES_TOTAL,
                                        old_value=0,
                                        new_value=1),
                              mock.call(account_id=self.account2.id,
                                        type=achievements_relations.ACHIEVEMENT_TYPE.POLITICS_VOTES_FOR,
                                        old_value=0,
                                        new_value=0),
                              mock.call(account_id=self.account2.id,
                                        type=achievements_relations.ACHIEVEMENT_TYPE.POLITICS_VOTES_AGAINST,
                                        old_value=0,
                                        new_value=0)])
| 48.396078 | 161 | 0.650312 | 2,855 | 24,682 | 5.371979 | 0.083012 | 0.052161 | 0.021908 | 0.02921 | 0.815088 | 0.784052 | 0.732412 | 0.707831 | 0.689444 | 0.656061 | 0 | 0.011506 | 0.246455 | 24,682 | 509 | 162 | 48.491159 | 0.813108 | 0.000567 | 0 | 0.487252 | 0 | 0 | 0.087531 | 0.075863 | 0 | 0 | 0 | 0 | 0.1983 | 1 | 0.13881 | false | 0 | 0.005666 | 0 | 0.167139 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
60e6c89ebbdb7d0f0813b824565edd0296014b9f | 144 | py | Python | tests/scratch/scratch2.py | gabicavalcante/pymoo | 1711ce3a96e5ef622d0116d6c7ea4d26cbe2c846 | [
"Apache-2.0"
] | 11 | 2018-05-22T17:38:02.000Z | 2022-02-28T03:34:33.000Z | tests/scratch/scratch2.py | gabicavalcante/pymoo | 1711ce3a96e5ef622d0116d6c7ea4d26cbe2c846 | [
"Apache-2.0"
] | 15 | 2022-01-03T19:36:36.000Z | 2022-03-30T03:57:58.000Z | tests/scratch/scratch2.py | gabicavalcante/pymoo | 1711ce3a96e5ef622d0116d6c7ea4d26cbe2c846 | [
"Apache-2.0"
] | 3 | 2021-11-22T08:01:47.000Z | 2022-03-11T08:53:58.000Z | from pymoo.factory import get_reference_directions
ref_dirs = get_reference_directions("das-dennis", 10, n_partitions=15)
print(len(ref_dirs)) | 28.8 | 70 | 0.826389 | 22 | 144 | 5.090909 | 0.772727 | 0.214286 | 0.392857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030075 | 0.076389 | 144 | 5 | 71 | 28.8 | 0.81203 | 0 | 0 | 0 | 0 | 0 | 0.068966 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
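The "das-dennis" method generates uniformly spaced reference directions on the unit simplex, so the length printed above is fully determined by the arguments: for `M` dimensions and `H` partitions there are C(H + M - 1, M - 1) points. A minimal sketch of that count, with `das_dennis_count` being an illustrative helper name (not part of pymoo's API):

```python
# Predict len(ref_dirs) for get_reference_directions("das-dennis", n_dim, n_partitions)
# without generating the directions. The count is the number of ways to split
# n_partitions units across n_dim coordinates: C(H + M - 1, M - 1).
from math import comb


def das_dennis_count(n_dim: int, n_partitions: int) -> int:
    return comb(n_partitions + n_dim - 1, n_dim - 1)


print(das_dennis_count(10, 15))  # 1307504, matching len(ref_dirs) above
```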
60f20923fb1640fee37e384e413c33b397932da4 | 58 | py | Python | cs250/test.py | icterguru/DrLutchClass | 4ae75e047d00e36af7fd5019a7d751a44bc7daa8 | [
"Apache-2.0"
] | null | null | null | cs250/test.py | icterguru/DrLutchClass | 4ae75e047d00e36af7fd5019a7d751a44bc7daa8 | [
"Apache-2.0"
] | null | null | null | cs250/test.py | icterguru/DrLutchClass | 4ae75e047d00e36af7fd5019a7d751a44bc7daa8 | [
"Apache-2.0"
] | 1 | 2018-09-20T20:50:08.000Z | 2018-09-20T20:50:08.000Z | print('hi')
print("Hello my friend, what do you do??")
| 11.6 | 43 | 0.62069 | 10 | 58 | 3.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.189655 | 58 | 4 | 44 | 14.5 | 0.765957 | 0 | 0 | 0 | 0 | 0 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
60fb4fb61efa431a3eeb7cc15f5708156f57ad85 | 131 | py | Python | tests/test_placeholder.py | tagordon/exomoons | 37bfed8ca9943b7ca43d85123e7a96c4452000e7 | [
"MIT"
] | 18 | 2018-10-27T20:13:45.000Z | 2021-01-20T23:42:09.000Z | tests/test_placeholder.py | tagordon/exomoons | 37bfed8ca9943b7ca43d85123e7a96c4452000e7 | [
"MIT"
] | 7 | 2018-10-25T21:33:13.000Z | 2019-10-15T15:42:50.000Z | tests/test_placeholder.py | tagordon/exomoons | 37bfed8ca9943b7ca43d85123e7a96c4452000e7 | [
"MIT"
] | 11 | 2018-11-08T20:58:33.000Z | 2021-04-08T19:21:54.000Z | """A placeholder file for unit tests."""
def test_commutation():
    """Test that math still works."""
    assert 1 + 2 == 2 + 1
| 18.714286 | 40 | 0.603053 | 19 | 131 | 4.105263 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04 | 0.236641 | 131 | 6 | 41 | 21.833333 | 0.74 | 0.473282 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
880120554b3c80d8090077afd16b21788f0055ae | 67 | py | Python | terminal/tasks.py | creditease-natrix/natrix | 8b97efdc9287645ea6b99dcf3a99fbe3f6ba6862 | [
"MIT"
] | 3 | 2019-06-28T02:25:10.000Z | 2019-12-16T08:50:08.000Z | terminal/tasks.py | creditease-natrix/natrix | 8b97efdc9287645ea6b99dcf3a99fbe3f6ba6862 | [
"MIT"
] | 3 | 2020-02-12T00:17:22.000Z | 2021-06-10T21:29:11.000Z | terminal/tasks.py | creditease-natrix/natrix | 8b97efdc9287645ea6b99dcf3a99fbe3f6ba6862 | [
"MIT"
] | 1 | 2019-06-22T06:04:59.000Z | 2019-06-22T06:04:59.000Z | # -*- coding: utf-8 -*-
"""
"""
from terminal.services import *
| 8.375 | 31 | 0.537313 | 7 | 67 | 5.142857 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018868 | 0.208955 | 67 | 7 | 32 | 9.571429 | 0.660377 | 0.313433 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
71485dadb355eb9372f10277fea72a4d2d222362 | 24 | py | Python | imrscrape/__init__.py | apdforward/imr-scrape | 6fecfdf8b239a903d12f3086aaeb5c0cc640f95c | [
"Apache-2.0"
] | null | null | null | imrscrape/__init__.py | apdforward/imr-scrape | 6fecfdf8b239a903d12f3086aaeb5c0cc640f95c | [
"Apache-2.0"
] | null | null | null | imrscrape/__init__.py | apdforward/imr-scrape | 6fecfdf8b239a903d12f3086aaeb5c0cc640f95c | [
"Apache-2.0"
] | null | null | null | from .main import scrape | 24 | 24 | 0.833333 | 4 | 24 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 24 | 1 | 24 | 24 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
714f14815dec56e342cf97c312d08951a3edb901 | 517 | py | Python | audit_log/registration.py | FairWindCo/django-audit-log | 267234b231e623884125b1bbb8557ad98a8ca5bb | [
"BSD-3-Clause"
] | null | null | null | audit_log/registration.py | FairWindCo/django-audit-log | 267234b231e623884125b1bbb8557ad98a8ca5bb | [
"BSD-3-Clause"
] | null | null | null | audit_log/registration.py | FairWindCo/django-audit-log | 267234b231e623884125b1bbb8557ad98a8ca5bb | [
"BSD-3-Clause"
] | null | null | null | class FieldRegistry(object):
_registry = {}
def __init__(self, field_cls):
self._field_cls = field_cls
def add_field(self, model, field):
reg = self.__class__._registry.setdefault(self._field_cls, {}).setdefault(model, [])
reg.append(field)
def get_fields(self, model):
return self.__class__._registry.setdefault(self._field_cls, {}).get(model, [])
def __contains__(self, model):
return model in self.__class__._registry.setdefault(self._field_cls, {})
| 32.3125 | 92 | 0.675048 | 62 | 517 | 5.048387 | 0.306452 | 0.153355 | 0.191693 | 0.258786 | 0.373802 | 0.373802 | 0.373802 | 0 | 0 | 0 | 0 | 0 | 0.193424 | 517 | 15 | 93 | 34.466667 | 0.7506 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.363636 | false | 0 | 0 | 0.181818 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
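The registry above maps a field class to the models carrying that field. A minimal self-contained exercise of the pattern, using plain strings as stand-in "model" and "field" objects (the names `AuditLogField`, `BlogPost`, etc. are illustrative, not from the package):

```python
# Self-contained copy of the registry pattern, exercised with string stand-ins.
class FieldRegistry(object):
    _registry = {}

    def __init__(self, field_cls):
        self._field_cls = field_cls

    def add_field(self, model, field):
        reg = self.__class__._registry.setdefault(self._field_cls, {}).setdefault(model, [])
        reg.append(field)

    def get_fields(self, model):
        return self.__class__._registry.setdefault(self._field_cls, {}).get(model, [])

    def __contains__(self, model):
        return model in self.__class__._registry.setdefault(self._field_cls, {})


registry = FieldRegistry("AuditLogField")
registry.add_field("BlogPost", "action_user")
registry.add_field("BlogPost", "action_date")

print(registry.get_fields("BlogPost"))  # ['action_user', 'action_date']
print("BlogPost" in registry)           # True
print("Comment" in registry)            # False
```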
717ed6e1a011a4ba253067cef6118c9e7f040988 | 1,341 | py | Python | py_insightvm_sdk/api/__init__.py | greenpau/py_insightvm_sdk | bd881f26e14cb9f0f9c47927469ec992de9de8e6 | [
"Apache-2.0"
] | 2 | 2019-03-15T16:05:54.000Z | 2020-07-19T18:37:50.000Z | py_insightvm_sdk/api/__init__.py | greenpau/py_insightvm_sdk | bd881f26e14cb9f0f9c47927469ec992de9de8e6 | [
"Apache-2.0"
] | 1 | 2021-03-26T04:46:12.000Z | 2021-03-26T04:51:23.000Z | py_insightvm_sdk/api/__init__.py | greenpau/py_insightvm_sdk | bd881f26e14cb9f0f9c47927469ec992de9de8e6 | [
"Apache-2.0"
] | null | null | null | from __future__ import absolute_import
# flake8: noqa
# import apis into api package
from py_insightvm_sdk.api.administration_api import AdministrationApi
from py_insightvm_sdk.api.asset_api import AssetApi
from py_insightvm_sdk.api.asset_discovery_api import AssetDiscoveryApi
from py_insightvm_sdk.api.asset_group_api import AssetGroupApi
from py_insightvm_sdk.api.credential_api import CredentialApi
from py_insightvm_sdk.api.policy_api import PolicyApi
from py_insightvm_sdk.api.policy_override_api import PolicyOverrideApi
from py_insightvm_sdk.api.remediation_api import RemediationApi
from py_insightvm_sdk.api.report_api import ReportApi
from py_insightvm_sdk.api.root_api import RootApi
from py_insightvm_sdk.api.scan_api import ScanApi
from py_insightvm_sdk.api.scan_engine_api import ScanEngineApi
from py_insightvm_sdk.api.scan_template_api import ScanTemplateApi
from py_insightvm_sdk.api.site_api import SiteApi
from py_insightvm_sdk.api.tag_api import TagApi
from py_insightvm_sdk.api.user_api import UserApi
from py_insightvm_sdk.api.vulnerability_api import VulnerabilityApi
from py_insightvm_sdk.api.vulnerability_check_api import VulnerabilityCheckApi
from py_insightvm_sdk.api.vulnerability_exception_api import VulnerabilityExceptionApi
from py_insightvm_sdk.api.vulnerability_result_api import VulnerabilityResultApi
| 51.576923 | 86 | 0.897092 | 200 | 1,341 | 5.65 | 0.265 | 0.106195 | 0.265487 | 0.318584 | 0.452212 | 0.30354 | 0 | 0 | 0 | 0 | 0 | 0.000803 | 0.070843 | 1,341 | 25 | 87 | 53.64 | 0.9061 | 0.030574 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
718c548a71c92d933d0436999370ff191bee59a5 | 190 | py | Python | Contributors/BryanYunis/solutions/coin_flip.py | FergusDevelopmentLLC/Coders-Workshop | 3513bd5f79eaa85b4d2a648c5f343a224842325d | [
"MIT"
] | 33 | 2019-12-02T23:29:47.000Z | 2022-03-24T02:40:36.000Z | Contributors/BryanYunis/solutions/coin_flip.py | FergusDevelopmentLLC/Coders-Workshop | 3513bd5f79eaa85b4d2a648c5f343a224842325d | [
"MIT"
] | 39 | 2020-01-15T19:28:12.000Z | 2021-11-26T05:13:29.000Z | Contributors/BryanYunis/solutions/coin_flip.py | FergusDevelopmentLLC/Coders-Workshop | 3513bd5f79eaa85b4d2a648c5f343a224842325d | [
"MIT"
] | 49 | 2019-12-02T23:29:53.000Z | 2022-03-03T01:11:37.000Z | # for a solution using recursion, see the JavaScript solution. Otherwise, the solution can simply be to return the log of n
from math import log
def coins(n):
    return log(n, 2)
| 23.75 | 124 | 0.710526 | 32 | 190 | 4.21875 | 0.71875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006897 | 0.236842 | 190 | 7 | 125 | 27.142857 | 0.924138 | 0.636842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 5 |
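The comment in `coin_flip.py` points at a recursive JavaScript solution; a hypothetical recursive counterpart in Python (my own sketch, not from the repo) counts how many times `n` can be halved, which for exact powers of two agrees with `log(n, 2)`:

```python
# Recursive variant: count halvings of n down to 1.
# For n a power of two this equals log(n, 2).
def coins_recursive(n: int) -> int:
    if n <= 1:
        return 0
    return 1 + coins_recursive(n // 2)


print(coins_recursive(8))  # 3, same as log(8, 2)
```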
71d8e2106fdfbe96b2edf280494ce92ec9fb61bb | 819 | py | Python | pava/implementation/natives/sun/java2d/pipe/SpanClipRenderer.py | laffra/pava | 54d10cf7f8def2f96e254c0356623d08f221536f | [
"MIT"
] | 4 | 2017-03-30T16:51:16.000Z | 2020-10-05T12:25:47.000Z | pava/implementation/natives/sun/java2d/pipe/SpanClipRenderer.py | laffra/pava | 54d10cf7f8def2f96e254c0356623d08f221536f | [
"MIT"
] | null | null | null | pava/implementation/natives/sun/java2d/pipe/SpanClipRenderer.py | laffra/pava | 54d10cf7f8def2f96e254c0356623d08f221536f | [
"MIT"
] | null | null | null | def add_native_methods(clazz):
def initIDs__java_lang_Class__java_lang_Class__(a0, a1):
raise NotImplementedError()
def fillTile__sun_java2d_pipe_RegionIterator__byte____int__int__int____(a0, a1, a2, a3, a4, a5):
raise NotImplementedError()
def eraseTile__sun_java2d_pipe_RegionIterator__byte____int__int__int____(a0, a1, a2, a3, a4, a5):
raise NotImplementedError()
clazz.initIDs__java_lang_Class__java_lang_Class__ = staticmethod(initIDs__java_lang_Class__java_lang_Class__)
clazz.fillTile__sun_java2d_pipe_RegionIterator__byte____int__int__int____ = fillTile__sun_java2d_pipe_RegionIterator__byte____int__int__int____
clazz.eraseTile__sun_java2d_pipe_RegionIterator__byte____int__int__int____ = eraseTile__sun_java2d_pipe_RegionIterator__byte____int__int__int____
| 54.6 | 149 | 0.849817 | 107 | 819 | 5.140187 | 0.242991 | 0.130909 | 0.141818 | 0.294545 | 0.84 | 0.84 | 0.84 | 0.66 | 0.66 | 0.276364 | 0 | 0.027285 | 0.105006 | 819 | 14 | 150 | 58.5 | 0.723056 | 0 | 0 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
71deb1e98bf5d251aa01a43d5664d0296198daad | 97 | py | Python | h2o-py/h2o/utils/__init__.py | gkirok/h2o-3 | b128be6644c685cb6059444a1dbd5106f76f2672 | [
"Apache-2.0"
] | 2 | 2019-09-02T15:49:45.000Z | 2019-09-02T16:01:58.000Z | h2o-py/h2o/utils/__init__.py | gkirok/h2o-3 | b128be6644c685cb6059444a1dbd5106f76f2672 | [
"Apache-2.0"
] | 2 | 2021-06-02T02:24:03.000Z | 2021-11-15T17:51:49.000Z | h2o-py/h2o/utils/__init__.py | gkirok/h2o-3 | b128be6644c685cb6059444a1dbd5106f76f2672 | [
"Apache-2.0"
] | 1 | 2021-05-23T07:41:39.000Z | 2021-05-23T07:41:39.000Z | from .shared_utils import mojo_predict_csv
__all__ = ('mojo_predict_csv', 'mojo_predict_pandas') | 32.333333 | 53 | 0.824742 | 14 | 97 | 4.928571 | 0.642857 | 0.478261 | 0.405797 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082474 | 97 | 3 | 53 | 32.333333 | 0.775281 | 0 | 0 | 0 | 0 | 0 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
e0804df03fc9278ae4e55e13608a342d454738ce | 1,314 | py | Python | test/test_generic_event.py | CiscoDevNet/python-msx-sdk | d7e0a08c656504b4f4551d263e67c671a2a04b3f | [
"MIT"
] | null | null | null | test/test_generic_event.py | CiscoDevNet/python-msx-sdk | d7e0a08c656504b4f4551d263e67c671a2a04b3f | [
"MIT"
] | null | null | null | test/test_generic_event.py | CiscoDevNet/python-msx-sdk | d7e0a08c656504b4f4551d263e67c671a2a04b3f | [
"MIT"
] | null | null | null | """
MSX SDK
MSX SDK client. # noqa: E501
The version of the OpenAPI document: 1.0.9
Generated by: https://openapi-generator.tech
"""
import sys
import unittest
import python_msx_sdk
from python_msx_sdk.model.generic_event_all_of import GenericEventAllOf
from python_msx_sdk.model.generic_event_create import GenericEventCreate
from python_msx_sdk.model.generic_event_security import GenericEventSecurity
from python_msx_sdk.model.generic_event_severity import GenericEventSeverity
from python_msx_sdk.model.generic_event_trace import GenericEventTrace
globals()['GenericEventAllOf'] = GenericEventAllOf
globals()['GenericEventCreate'] = GenericEventCreate
globals()['GenericEventSecurity'] = GenericEventSecurity
globals()['GenericEventSeverity'] = GenericEventSeverity
globals()['GenericEventTrace'] = GenericEventTrace
from python_msx_sdk.model.generic_event import GenericEvent
class TestGenericEvent(unittest.TestCase):
    """GenericEvent unit test stubs"""

    def setUp(self):
        pass

    def tearDown(self):
        pass

    def testGenericEvent(self):
        """Test GenericEvent"""
        # FIXME: construct object with mandatory attributes with example values
        # model = GenericEvent()  # noqa: E501
        pass


if __name__ == '__main__':
    unittest.main()
| 28.565217 | 79 | 0.765601 | 145 | 1,314 | 6.703448 | 0.413793 | 0.055556 | 0.08642 | 0.098765 | 0.203704 | 0.203704 | 0.203704 | 0 | 0 | 0 | 0 | 0.008094 | 0.153729 | 1,314 | 45 | 80 | 29.2 | 0.866007 | 0.213851 | 0 | 0.130435 | 1 | 0 | 0.100301 | 0 | 0 | 0 | 0 | 0.022222 | 0 | 1 | 0.130435 | false | 0.130435 | 0.391304 | 0 | 0.565217 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
e0962f66ec449d89544f008c5d68374417a7a91e | 117 | py | Python | iwy_ok.py | TRMIKO/I-Watch-you | 99e29ccc7f917a00ba30397a325013fcd27e7519 | [
"MIT"
] | null | null | null | iwy_ok.py | TRMIKO/I-Watch-you | 99e29ccc7f917a00ba30397a325013fcd27e7519 | [
"MIT"
] | null | null | null | iwy_ok.py | TRMIKO/I-Watch-you | 99e29ccc7f917a00ba30397a325013fcd27e7519 | [
"MIT"
] | null | null | null | import telepot
bot = telepot.Bot('440619284:AAFvygLY53ZjqgGuk8DJB399Xfu2Rx8YT-s')
bot.sendMessage(361114126, 'okay')
| 29.25 | 66 | 0.820513 | 12 | 117 | 8 | 0.75 | 0.208333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.236364 | 0.059829 | 117 | 3 | 67 | 39 | 0.636364 | 0 | 0 | 0 | 0 | 0 | 0.418803 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
e0c0287897b1bfed04d34cd9964d821308e6c7b4 | 39,538 | py | Python | pioreactor/tests/test_dosing_control.py | CamDavidsonPilon/morbidostat | c6b0f397faf88144087d97047e6f6da90d8d1068 | [
"MIT"
] | 1 | 2020-11-02T14:34:59.000Z | 2020-11-02T14:34:59.000Z | pioreactor/tests/test_dosing_control.py | CamDavidsonPilon/morbidostat | c6b0f397faf88144087d97047e6f6da90d8d1068 | [
"MIT"
] | 1 | 2020-11-21T01:35:02.000Z | 2020-11-21T01:35:02.000Z | pioreactor/tests/test_dosing_control.py | CamDavidsonPilon/morbidostat | c6b0f397faf88144087d97047e6f6da90d8d1068 | [
"MIT"
] | 1 | 2020-11-12T04:02:24.000Z | 2020-11-12T04:02:24.000Z | # -*- coding: utf-8 -*-
from __future__ import annotations
import json
import time
from datetime import datetime
from datetime import timedelta
from typing import Any
import pytest
from pioreactor import exc
from pioreactor import pubsub
from pioreactor.automations import DosingAutomationJob
from pioreactor.automations import events
from pioreactor.automations.dosing.base import AltMediaCalculator
from pioreactor.automations.dosing.continuous_cycle import ContinuousCycle
from pioreactor.automations.dosing.morbidostat import Morbidostat
from pioreactor.automations.dosing.pid_morbidostat import PIDMorbidostat
from pioreactor.automations.dosing.pid_turbidostat import PIDTurbidostat
from pioreactor.automations.dosing.silent import Silent
from pioreactor.automations.dosing.turbidostat import Turbidostat
from pioreactor.background_jobs.dosing_control import DosingController
from pioreactor.utils import local_persistant_storage
from pioreactor.utils.timing import current_utc_timestamp
from pioreactor.whoami import get_unit_name
unit = get_unit_name()
def pause() -> None:
    # to avoid race conditions when updating state
    time.sleep(0.5)

def setup_function() -> None:
    with local_persistant_storage("pump_calibration") as cache:
        cache["media_ml_calibration"] = json.dumps(
            {"duration_": 1.0, "bias_": 0, "dc": 60, "hz": 100, "timestamp": "2010-01-01"}
        )
        cache["alt_media_ml_calibration"] = json.dumps(
            {"duration_": 1.0, "bias_": 0, "dc": 60, "hz": 100, "timestamp": "2010-01-01"}
        )
        cache["waste_ml_calibration"] = json.dumps(
            {"duration_": 1.0, "bias_": 0, "dc": 60, "hz": 100, "timestamp": "2010-01-01"}
        )

def test_silent_automation() -> None:
    experiment = "test_silent_automation"
    with Silent(volume=None, duration=60, unit=unit, experiment=experiment) as algo:
        pause()
        pubsub.publish(
            f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
            json.dumps({"growth_rate": 0.01, "timestamp": current_utc_timestamp()}),
        )
        pubsub.publish(
            f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
            json.dumps({"od_filtered": 1.0, "timestamp": current_utc_timestamp()}),
        )
        pause()
        assert isinstance(algo.run(), events.NoEvent)

        pubsub.publish(
            f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
            json.dumps({"growth_rate": 0.02, "timestamp": current_utc_timestamp()}),
        )
        pubsub.publish(
            f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
            json.dumps({"od_filtered": 1.1, "timestamp": current_utc_timestamp()}),
        )
        pause()
        assert isinstance(algo.run(), events.NoEvent)

def test_turbidostat_automation() -> None:
    experiment = "test_turbidostat_automation"
    target_od = 1.0
    with Turbidostat(
        target_od=target_od, duration=60, volume=0.25, unit=unit, experiment=experiment
    ) as algo:
        pubsub.publish(
            f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
            json.dumps({"growth_rate": 0.01, "timestamp": current_utc_timestamp()}),
        )
        pubsub.publish(
            f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
            json.dumps({"od_filtered": 0.98, "timestamp": current_utc_timestamp()}),
        )
        pause()
        assert isinstance(algo.run(), events.NoEvent)

        pubsub.publish(
            f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
            json.dumps({"growth_rate": 0.01, "timestamp": current_utc_timestamp()}),
        )
        pubsub.publish(
            f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
            json.dumps({"od_filtered": 1.0, "timestamp": current_utc_timestamp()}),
        )
        pause()
        assert isinstance(algo.run(), events.DilutionEvent)

        pubsub.publish(
            f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
            json.dumps({"growth_rate": 0.01, "timestamp": current_utc_timestamp()}),
        )
        pubsub.publish(
            f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
            json.dumps({"od_filtered": 1.01, "timestamp": current_utc_timestamp()}),
        )
        pause()
        assert isinstance(algo.run(), events.DilutionEvent)

        pubsub.publish(
            f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
            json.dumps({"growth_rate": 0.01, "timestamp": current_utc_timestamp()}),
        )
        pubsub.publish(
            f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
            json.dumps({"od_filtered": 0.99, "timestamp": current_utc_timestamp()}),
        )
        pause()
        assert isinstance(algo.run(), events.NoEvent)

def test_pid_turbidostat_automation() -> None:
    experiment = "test_pid_turbidostat_automation"
    target_od = 2.4
    with PIDTurbidostat(target_od=target_od, duration=20, unit=unit, experiment=experiment) as algo:
        pubsub.publish(
            f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
            json.dumps({"growth_rate": 0.01, "timestamp": current_utc_timestamp()}),
        )
        pubsub.publish(
            f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
            json.dumps({"od_filtered": 2.6, "timestamp": current_utc_timestamp()}),
        )
        pause()
        e = algo.run()
        assert isinstance(e, events.DilutionEvent)

        pubsub.publish(
            f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
            json.dumps({"growth_rate": 0.01, "timestamp": current_utc_timestamp()}),
        )
        pubsub.publish(
            f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
            json.dumps({"od_filtered": 2.8, "timestamp": current_utc_timestamp()}),
        )
        pause()
        e = algo.run()
        assert isinstance(e, events.DilutionEvent)

def test_morbidostat_automation() -> None:
    experiment = "test_morbidostat_automation"
    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
        None,
        retain=True,
    )
    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
        None,
        retain=True,
    )

    target_od = 1.0
    algo = Morbidostat(
        target_od=target_od, duration=60, volume=0.25, unit=unit, experiment=experiment
    )

    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
        json.dumps({"growth_rate": 0.01, "timestamp": current_utc_timestamp()}),
    )
    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
        json.dumps({"od_filtered": 0.95, "timestamp": current_utc_timestamp()}),
    )
    pause()
    assert isinstance(algo.run(), events.NoEvent)

    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
        json.dumps({"growth_rate": 0.01, "timestamp": current_utc_timestamp()}),
    )
    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
        json.dumps({"od_filtered": 0.99, "timestamp": current_utc_timestamp()}),
    )
    pause()
    assert isinstance(algo.run(), events.DilutionEvent)

    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
        json.dumps({"growth_rate": 0.01, "timestamp": current_utc_timestamp()}),
    )
    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
        json.dumps({"od_filtered": 1.05, "timestamp": current_utc_timestamp()}),
    )
    pause()
    assert isinstance(algo.run(), events.AddAltMediaEvent)

    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
        json.dumps({"growth_rate": 0.01, "timestamp": current_utc_timestamp()}),
    )
    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
        json.dumps({"od_filtered": 1.03, "timestamp": current_utc_timestamp()}),
    )
    pause()
    assert isinstance(algo.run(), events.DilutionEvent)

    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
        json.dumps({"growth_rate": 0.01, "timestamp": current_utc_timestamp()}),
    )
    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
        json.dumps({"od_filtered": 1.04, "timestamp": current_utc_timestamp()}),
    )
    pause()
    assert isinstance(algo.run(), events.AddAltMediaEvent)

    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
        json.dumps({"growth_rate": 0.01, "timestamp": current_utc_timestamp()}),
    )
    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
        json.dumps({"od_filtered": 0.99, "timestamp": current_utc_timestamp()}),
    )
    pause()
    assert isinstance(algo.run(), events.DilutionEvent)

    algo.clean_up()

def test_pid_morbidostat_automation() -> None:
    experiment = "test_pid_morbidostat_automation"
    target_growth_rate = 0.09
    algo = PIDMorbidostat(
        target_od=1.0,
        target_growth_rate=target_growth_rate,
        duration=60,
        unit=unit,
        experiment=experiment,
    )

    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
        json.dumps({"growth_rate": 0.08, "timestamp": current_utc_timestamp()}),
    )
    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
        json.dumps({"od_filtered": 0.5, "timestamp": current_utc_timestamp()}),
    )
    pause()
    assert isinstance(algo.run(), events.NoEvent)

    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
        json.dumps({"growth_rate": 0.08, "timestamp": current_utc_timestamp()}),
    )
    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
        json.dumps({"od_filtered": 0.95, "timestamp": current_utc_timestamp()}),
    )
    pause()
    assert isinstance(algo.run(), events.AddAltMediaEvent)

    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
        json.dumps({"growth_rate": 0.07, "timestamp": current_utc_timestamp()}),
    )
    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
        json.dumps({"od_filtered": 0.95, "timestamp": current_utc_timestamp()}),
    )
    pause()
    assert isinstance(algo.run(), events.AddAltMediaEvent)

    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
        json.dumps({"growth_rate": 0.065, "timestamp": current_utc_timestamp()}),
    )
    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
        json.dumps({"od_filtered": 0.95, "timestamp": current_utc_timestamp()}),
    )
    pause()
    assert isinstance(algo.run(), events.AddAltMediaEvent)

    algo.clean_up()

def test_changing_morbidostat_parameters_over_mqtt() -> None:
    experiment = "test_changing_morbidostat_parameters_over_mqtt"
    target_growth_rate = 0.05
    algo = PIDMorbidostat(
        target_growth_rate=target_growth_rate,
        target_od=1.0,
        duration=60,
        unit=unit,
        experiment=experiment,
    )
    assert algo.target_growth_rate == target_growth_rate
    pause()

    new_target = 0.07
    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/dosing_automation/target_growth_rate/set",
        new_target,
    )
    pause()
    assert algo.target_growth_rate == new_target
    assert algo.pid.pid.setpoint == new_target
    algo.clean_up()

def test_changing_turbidostat_params_over_mqtt() -> None:
    experiment = "test_changing_turbidostat_params_over_mqtt"
    og_volume = 0.5
    og_target_od = 1.0
    algo = Turbidostat(
        volume=og_volume,
        target_od=og_target_od,
        duration=60,
        unit=unit,
        experiment=experiment,
    )
    assert algo.volume == og_volume

    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
        json.dumps({"growth_rate": 0.05, "timestamp": current_utc_timestamp()}),
    )
    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
        json.dumps({"od_filtered": 1.0, "timestamp": current_utc_timestamp()}),
    )
    pause()
    algo.run()

    pubsub.publish(f"pioreactor/{unit}/{experiment}/dosing_automation/volume/set", 1.0)
    pause()

    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
        json.dumps({"growth_rate": 0.05, "timestamp": current_utc_timestamp()}),
    )
    pubsub.publish(
        f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
        json.dumps({"od_filtered": 1.0, "timestamp": current_utc_timestamp()}),
    )
    algo.run()

    assert algo.volume == 1.0

    new_od = 1.5
    pubsub.publish(f"pioreactor/{unit}/{experiment}/dosing_automation/target_od/set", new_od)
    pause()
    assert algo.target_od == new_od
    algo.clean_up()

def test_changing_parameters_over_mqtt_with_unknown_parameter() -> None:
    experiment = "test_changing_parameters_over_mqtt_with_unknown_parameter"
    with pubsub.collect_all_logs_of_level("DEBUG", unit, experiment) as bucket:
        with DosingAutomationJob(
            target_growth_rate=0.05,
            target_od=1.0,
            duration=60,
            unit=unit,
            experiment=experiment,
        ):
            pubsub.publish(f"pioreactor/{unit}/{experiment}/dosing_automation/garbage/set", 0.07)
            # there should be a log published with "Unable to set garbage in dosing_automation"
            pause()
            pause()
            pause()

        assert len(bucket) > 0
        assert any(["garbage" in log["message"] for log in bucket])

def test_pause_in_dosing_automation() -> None:
    experiment = "test_pause_in_dosing_automation"
    with DosingAutomationJob(
        target_growth_rate=0.05,
        target_od=1.0,
        duration=60,
        unit=unit,
        experiment=experiment,
    ) as algo:
        pause()
        pubsub.publish(f"pioreactor/{unit}/{experiment}/dosing_automation/$state/set", "sleeping")
        pause()
        assert algo.state == "sleeping"

        pubsub.publish(f"pioreactor/{unit}/{experiment}/dosing_automation/$state/set", "ready")
        pause()
        assert algo.state == "ready"

def test_pause_in_dosing_control_also_pauses_automation() -> None:
    experiment = "test_pause_in_dosing_control_also_pauses_automation"
    algo = DosingController(
        "turbidostat",
        target_od=1.0,
        duration=5 / 60,
        volume=1.0,
        unit=unit,
        experiment=experiment,
    )
    pause()
    pubsub.publish(f"pioreactor/{unit}/{experiment}/dosing_control/$state/set", "sleeping")
    pause()
    assert algo.state == "sleeping"
    assert algo.automation_job.state == "sleeping"

    pubsub.publish(f"pioreactor/{unit}/{experiment}/dosing_control/$state/set", "ready")
    pause()
    assert algo.state == "ready"
    assert algo.automation_job.state == "ready"
    algo.clean_up()

def test_old_readings_will_not_execute_io() -> None:
experiment = "test_old_readings_will_not_execute_io"
with DosingAutomationJob(
target_growth_rate=0.05,
target_od=1.0,
duration=60,
unit=unit,
experiment=experiment,
) as algo:
algo._latest_growth_rate = 1
algo._latest_od = 1
algo.latest_od_at = datetime.utcnow() - timedelta(minutes=10)
algo.latest_growth_rate_at = datetime.utcnow() - timedelta(minutes=4)
assert algo.most_stale_time == algo.latest_od_at
assert isinstance(algo.run(), events.NoEvent)
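The staleness guard exercised above can be sketched as follows. This is a hypothetical reimplementation, not pioreactor's actual code: the automation presumably keeps the timestamp of each incoming signal, treats the oldest one as the "most stale" reading, and skips dosing when that reading is too old.

```python
from datetime import datetime, timedelta

def most_stale_time(latest_od_at: datetime, latest_growth_rate_at: datetime) -> datetime:
    # the older (smaller) timestamp is the most stale reading
    return min(latest_od_at, latest_growth_rate_at)

def should_skip(now: datetime, *timestamps: datetime,
                max_age: timedelta = timedelta(minutes=5)) -> bool:
    # refuse to dose if the most stale reading is older than max_age
    return (now - min(timestamps)) > max_age

now = datetime.utcnow()
od_at = now - timedelta(minutes=10)
gr_at = now - timedelta(minutes=4)
assert most_stale_time(od_at, gr_at) == od_at
assert should_skip(now, od_at, gr_at)  # 10-minute-old OD data -> skip dosing
```

The `max_age` threshold here is an assumption for illustration; the test only checks that a `NoEvent` is returned when data is stale.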
def test_throughput_calculator() -> None:
experiment = "test_throughput_calculator"
with local_persistant_storage("media_throughput") as c:
c[experiment] = "0.0"
with local_persistant_storage("alt_media_throughput") as c:
c[experiment] = "0.0"
with local_persistant_storage("alt_media_fraction") as c:
c[experiment] = "0.0"
algo = DosingController(
"pid_morbidostat",
target_growth_rate=0.05,
target_od=1.0,
duration=60,
unit=unit,
experiment=experiment,
)
assert algo.automation_job.media_throughput == 0
pause()
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
json.dumps({"growth_rate": 0.08, "timestamp": current_utc_timestamp()}),
)
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
json.dumps({"od_filtered": 1.0, "timestamp": current_utc_timestamp()}),
)
pause()
algo.automation_job.run()
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
json.dumps({"growth_rate": 0.08, "timestamp": current_utc_timestamp()}),
)
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
json.dumps({"od_filtered": 0.95, "timestamp": current_utc_timestamp()}),
)
pause()
algo.automation_job.run()
assert algo.automation_job.media_throughput > 0
assert algo.automation_job.alt_media_throughput > 0
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
json.dumps({"growth_rate": 0.07, "timestamp": current_utc_timestamp()}),
)
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
json.dumps({"od_filtered": 0.95, "timestamp": current_utc_timestamp()}),
)
pause()
algo.automation_job.run()
assert algo.automation_job.media_throughput > 0
assert algo.automation_job.alt_media_throughput > 0
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
json.dumps({"growth_rate": 0.065, "timestamp": current_utc_timestamp()}),
)
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
json.dumps({"od_filtered": 0.95, "timestamp": current_utc_timestamp()}),
)
pause()
algo.automation_job.run()
assert algo.automation_job.media_throughput > 0
assert algo.automation_job.alt_media_throughput > 0
algo.clean_up()
def test_throughput_calculator_restart() -> None:
experiment = "test_throughput_calculator_restart"
with local_persistant_storage("media_throughput") as c:
c[experiment] = str(1.0)
with local_persistant_storage("alt_media_throughput") as c:
c[experiment] = str(1.5)
with DosingController(
"turbidostat",
target_od=1.0,
duration=5 / 60,
volume=1.0,
unit=unit,
experiment=experiment,
) as algo:
pause()
assert algo.automation_job.media_throughput == 1.0
assert algo.automation_job.alt_media_throughput == 1.5
def test_throughput_calculator_manual_set() -> None:
experiment = "test_throughput_calculator_manual_set"
with local_persistant_storage("media_throughput") as c:
c[experiment] = str(1.0)
with local_persistant_storage("alt_media_throughput") as c:
c[experiment] = str(1.5)
with DosingController(
"turbidostat",
target_od=1.0,
duration=5 / 60,
volume=1.0,
unit=unit,
experiment=experiment,
) as algo:
pause()
assert algo.automation_job.media_throughput == 1.0
assert algo.automation_job.alt_media_throughput == 1.5
pubsub.publish(
f"pioreactor/{unit}/{experiment}/dosing_automation/alt_media_throughput/set",
0,
)
pubsub.publish(f"pioreactor/{unit}/{experiment}/dosing_automation/media_throughput/set", 0)
pause()
pause()
assert algo.automation_job.media_throughput == 0
assert algo.automation_job.alt_media_throughput == 0
def test_execute_io_action() -> None:
experiment = "test_execute_io_action"
with local_persistant_storage("media_throughput") as c:
c[experiment] = "0.0"
with local_persistant_storage("alt_media_throughput") as c:
c[experiment] = "0.0"
with DosingController("silent", unit=unit, experiment=experiment) as ca:
ca.automation_job.execute_io_action(media_ml=0.65, alt_media_ml=0.35, waste_ml=0.65 + 0.35)
pause()
assert ca.automation_job.media_throughput == 0.65
assert ca.automation_job.alt_media_throughput == 0.35
ca.automation_job.execute_io_action(media_ml=0.15, alt_media_ml=0.15, waste_ml=0.3)
pause()
assert ca.automation_job.media_throughput == 0.80
assert ca.automation_job.alt_media_throughput == 0.50
ca.automation_job.execute_io_action(media_ml=1.0, alt_media_ml=0, waste_ml=1)
pause()
assert ca.automation_job.media_throughput == 1.80
assert ca.automation_job.alt_media_throughput == 0.50
ca.automation_job.execute_io_action(media_ml=0.0, alt_media_ml=1.0, waste_ml=1)
pause()
assert ca.automation_job.media_throughput == 1.80
assert ca.automation_job.alt_media_throughput == 1.50
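The throughput bookkeeping asserted above is just a running sum per pump. A minimal sketch (hypothetical names, not pioreactor's implementation) reproduces the expected totals:

```python
class ThroughputTracker:
    """Accumulate total media dosed per pump across io actions."""

    def __init__(self) -> None:
        self.media_throughput = 0.0
        self.alt_media_throughput = 0.0

    def record(self, media_ml: float, alt_media_ml: float) -> None:
        self.media_throughput += media_ml
        self.alt_media_throughput += alt_media_ml

t = ThroughputTracker()
# the same four io actions as in test_execute_io_action
for media, alt in [(0.65, 0.35), (0.15, 0.15), (1.0, 0.0), (0.0, 1.0)]:
    t.record(media, alt)
assert abs(t.media_throughput - 1.80) < 1e-9
assert abs(t.alt_media_throughput - 1.50) < 1e-9
```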
def test_execute_io_action2() -> None:
experiment = "test_execute_io_action2"
with local_persistant_storage("media_throughput") as c:
c[experiment] = "0.0"
with local_persistant_storage("alt_media_throughput") as c:
c[experiment] = "0.0"
with local_persistant_storage("alt_media_fraction") as c:
c[experiment] = "0.0"
with DosingController("silent", unit=unit, experiment=experiment) as ca:
ca.automation_job.execute_io_action(media_ml=1.25, alt_media_ml=0.01, waste_ml=1.26)
pause()
assert ca.automation_job.media_throughput == 1.25
assert ca.automation_job.alt_media_throughput == 0.01
assert abs(ca.automation_job.alt_media_fraction - 0.0007142) < 0.000001
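Where does 0.0007142 come from? Assuming a vial volume of about 14 mL, dosing 0.01 mL of alt media into an otherwise alt-media-free vial leaves a fraction of roughly 0.01 / 14; the test uses a tolerance because the exact event-by-event bookkeeping differs slightly from this first-order estimate:

```python
VIAL_VOLUME_ML = 14.0  # assumed default vial volume
alt_media_ml = 0.01
expected_fraction = alt_media_ml / VIAL_VOLUME_ML
assert abs(expected_fraction - 0.0007142) < 0.000001
```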
def test_execute_io_action_outputs1() -> None:
# regression test
experiment = "test_execute_io_action_outputs1"
with local_persistant_storage("media_throughput") as c:
c[experiment] = "0.0"
with local_persistant_storage("alt_media_throughput") as c:
c[experiment] = "0.0"
with local_persistant_storage("alt_media_fraction") as c:
c[experiment] = "0.0"
ca = DosingAutomationJob(unit=unit, experiment=experiment)
result = ca.execute_io_action(media_ml=1.25, alt_media_ml=0.01, waste_ml=1.26)
assert result[0] == 1.25
assert result[1] == 0.01
assert result[2] == 1.26
ca.clean_up()
def test_execute_io_action_outputs_will_be_null_if_calibration_is_not_defined() -> None:
# regression test
experiment = "test_execute_io_action_outputs_will_be_null_if_calibration_is_not_defined"
with local_persistant_storage("media_throughput") as c:
c[experiment] = "0.0"
with local_persistant_storage("alt_media_throughput") as c:
c[experiment] = "0.0"
with local_persistant_storage("alt_media_fraction") as c:
c[experiment] = "0.0"
with local_persistant_storage("pump_calibration") as cache:
del cache["media_ml_calibration"]
del cache["alt_media_ml_calibration"]
with pytest.raises(exc.CalibrationError):
with DosingAutomationJob(unit=unit, experiment=experiment, skip_first_run=True) as ca:
ca.execute_io_action(media_ml=0.1, alt_media_ml=0.1, waste_ml=0.2)
# add back to cache
with local_persistant_storage("pump_calibration") as cache:
cache["media_ml_calibration"] = json.dumps({"duration_": 1.0})
cache["alt_media_ml_calibration"] = json.dumps({"duration_": 1.0})
def test_execute_io_action_outputs_will_shortcut_if_disconnected() -> None:
# regression test
experiment = "test_execute_io_action_outputs_will_shortcut_if_disconnected"
with local_persistant_storage("media_throughput") as c:
c[experiment] = "0.0"
with local_persistant_storage("alt_media_throughput") as c:
c[experiment] = "0.0"
with local_persistant_storage("alt_media_fraction") as c:
c[experiment] = "0.0"
ca = DosingAutomationJob(unit=unit, experiment=experiment)
ca.clean_up()
result = ca.execute_io_action(media_ml=1.25, alt_media_ml=0.01, waste_ml=1.26)
assert result[0] == 0.0
assert result[1] == 0.0
assert result[2] == 0.0
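The "shortcut if disconnected" behaviour can be sketched as a guard at the top of the dosing routine. This is an illustrative reimplementation, not pioreactor's actual code:

```python
def execute_io_action(state: str, media_ml: float, alt_media_ml: float, waste_ml: float):
    if state == "disconnected":
        # a torn-down job must not pump anything; report zero volumes
        return (0.0, 0.0, 0.0)
    return (media_ml, alt_media_ml, waste_ml)

assert execute_io_action("disconnected", 1.25, 0.01, 1.26) == (0.0, 0.0, 0.0)
assert execute_io_action("ready", 1.25, 0.01, 1.26) == (1.25, 0.01, 1.26)
```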
def test_PIDMorbidostat() -> None:
experiment = "test_PIDMorbidostat"
algo = PIDMorbidostat(
target_od=1.0,
target_growth_rate=0.01,
duration=5 / 60,
unit=unit,
experiment=experiment,
)
assert algo.latest_event is None
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
json.dumps({"growth_rate": 0.08, "timestamp": current_utc_timestamp()}),
)
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
json.dumps({"od_filtered": 0.5, "timestamp": current_utc_timestamp()}),
)
time.sleep(10)
pause()
assert isinstance(algo.latest_event, events.NoEvent)
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
json.dumps({"growth_rate": 0.08, "timestamp": current_utc_timestamp()}),
)
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
json.dumps({"od_filtered": 0.95, "timestamp": current_utc_timestamp()}),
)
time.sleep(20)
pause()
assert isinstance(algo.latest_event, events.AddAltMediaEvent)
algo.clean_up()
def test_changing_duration_over_mqtt() -> None:
experiment = "test_changing_duration_over_mqtt"
with PIDMorbidostat(
target_od=1.0,
target_growth_rate=0.01,
duration=5 / 60,
unit=unit,
experiment=experiment,
) as algo:
assert algo.latest_event is None
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
json.dumps({"growth_rate": 0.08, "timestamp": current_utc_timestamp()}),
)
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
json.dumps({"od_filtered": 0.5, "timestamp": current_utc_timestamp()}),
)
time.sleep(10)
assert isinstance(algo.latest_event, events.NoEvent)
pubsub.publish(
f"pioreactor/{unit}/{experiment}/dosing_automation/duration/set",
1, # in minutes
)
time.sleep(10)
assert algo.run_thread.interval == 60 # in seconds
def test_changing_duration_over_mqtt_will_start_next_run_earlier() -> None:
experiment = "test_changing_duration_over_mqtt_will_start_next_run_earlier"
with PIDMorbidostat(
target_od=1.0,
target_growth_rate=0.01,
duration=10 / 60,
unit=unit,
experiment=experiment,
) as algo:
assert algo.latest_event is None
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
json.dumps({"growth_rate": 0.08, "timestamp": current_utc_timestamp()}),
)
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
json.dumps({"od_filtered": 0.5, "timestamp": current_utc_timestamp()}),
)
time.sleep(15)
assert isinstance(algo.latest_event, events.NoEvent)
pubsub.publish(
f"pioreactor/{unit}/{experiment}/dosing_automation/duration/set",
15 / 60, # in minutes
)
time.sleep(5)
assert algo.run_thread.interval == 15 # in seconds
assert algo.run_thread.run_after > 0
def test_changing_algo_over_mqtt_with_wrong_automation_type() -> None:
experiment = "test_changing_algo_over_mqtt_with_wrong_automation_type"
with DosingController(
"turbidostat",
target_od=1.0,
duration=5 / 60,
volume=1.0,
unit=unit,
experiment=experiment,
) as algo:
assert algo.automation.automation_name == "turbidostat"
assert isinstance(algo.automation_job, Turbidostat)
pubsub.publish(
f"pioreactor/{unit}/{experiment}/dosing_control/automation/set",
json.dumps(
{
"automation_name": "pid_morbidostat",
"type": "led",
"args": {
"duration": 60,
"target_od": 1.0,
"target_growth_rate": 0.07,
},
}
),
)
time.sleep(8)
assert algo.automation.automation_name == "turbidostat"
def test_changing_algo_over_mqtt_solo() -> None:
experiment = "test_changing_algo_over_mqtt_solo"
with DosingController(
"turbidostat",
target_od=1.0,
duration=5 / 60,
volume=1.0,
unit=unit,
experiment=experiment,
) as algo:
assert algo.automation.automation_name == "turbidostat"
assert isinstance(algo.automation_job, Turbidostat)
pubsub.publish(
f"pioreactor/{unit}/{experiment}/dosing_control/automation/set",
json.dumps(
{
"automation_name": "pid_morbidostat",
"type": "dosing",
"args": {
"duration": 60,
"target_od": 1.0,
"target_growth_rate": 0.07,
},
}
),
)
time.sleep(8)
assert algo.automation.automation_name == "pid_morbidostat"
assert isinstance(algo.automation_job, PIDMorbidostat)
assert algo.automation_job.target_growth_rate == 0.07
def test_changing_algo_over_mqtt_when_it_fails_will_rollback() -> None:
experiment = "test_changing_algo_over_mqtt_when_it_fails_will_rollback"
with DosingController(
"turbidostat",
target_od=1.0,
duration=5 / 60,
volume=1.0,
unit=unit,
experiment=experiment,
) as algo:
assert algo.automation.automation_name == "turbidostat"
assert isinstance(algo.automation_job, Turbidostat)
pause()
pubsub.publish(
f"pioreactor/{unit}/{experiment}/dosing_control/automation/set",
json.dumps(
{
"automation_name": "pid_morbidostat",
"args": {"duration": 60},
"type": "dosing",
}
),
)
time.sleep(10)
assert algo.automation.automation_name == "turbidostat"
assert isinstance(algo.automation_job, Turbidostat)
assert algo.automation_job.target_od == 1.0
pause()
pause()
pause()
def test_changing_algo_over_mqtt_will_not_produce_two_dosing_jobs() -> None:
experiment = "test_changing_algo_over_mqtt_will_not_produce_two_dosing_jobs"
with local_persistant_storage("media_throughput") as c:
c[experiment] = "0.0"
with local_persistant_storage("alt_media_throughput") as c:
c[experiment] = "0.0"
with local_persistant_storage("alt_media_fraction") as c:
c[experiment] = "0.0"
algo = DosingController(
"pid_turbidostat",
volume=1.0,
target_od=0.4,
duration=60,
unit=unit,
experiment=experiment,
)
assert algo.automation.automation_name == "pid_turbidostat"
pause()
pubsub.publish(
f"pioreactor/{unit}/{experiment}/dosing_control/automation/set",
json.dumps(
{
"automation_name": "turbidostat",
"type": "dosing",
"args": {
"duration": 60,
"target_od": 1.0,
"volume": 1.0,
"skip_first_run": 1,
},
}
),
)
time.sleep(10) # need to wait for all jobs to disconnect correctly and threads to join.
assert isinstance(algo.automation_job, Turbidostat)
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
json.dumps({"growth_rate": 1.0, "timestamp": current_utc_timestamp()}),
)
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
json.dumps({"od_filtered": 1.0, "timestamp": current_utc_timestamp()}),
)
pause()
# note that we manually run, as we have skipped the first run in the json
algo.automation_job.run()
time.sleep(5)
assert algo.automation_job.media_throughput == 1.0
pubsub.publish(f"pioreactor/{unit}/{experiment}/dosing_automation/target_od/set", 1.5)
pause()
pause()
assert algo.automation_job.target_od == 1.5
algo.clean_up()
def test_changing_algo_over_mqtt_with_wrong_type_is_okay() -> None:
experiment = "test_changing_algo_over_mqtt_with_wrong_type_is_okay"
with local_persistant_storage("media_throughput") as c:
c[experiment] = "0.0"
algo = DosingController(
"pid_turbidostat",
volume=1.0,
target_od=0.4,
duration=2 / 60,
unit=unit,
experiment=experiment,
)
assert algo.automation.automation_name == "pid_turbidostat"
assert algo.automation_name == "pid_turbidostat"
pause()
pubsub.publish(
f"pioreactor/{unit}/{experiment}/dosing_control/automation/set",
json.dumps(
{
"automation_name": "pid_turbidostat",
"type": "dosing",
"args": {"duration": "60", "target_od": "1.0", "volume": "1.0"},
}
),
)
time.sleep(7) # need to wait for all jobs to disconnect correctly and threads to join.
assert isinstance(algo.automation_job, PIDTurbidostat)
assert algo.automation_job.target_od == 1.0
algo.clean_up()
def test_disconnect_cleanly() -> None:
experiment = "test_disconnect_cleanly"
algo = DosingController(
"turbidostat",
target_od=1.0,
duration=50,
unit=unit,
volume=1.0,
experiment=experiment,
)
assert algo.automation.automation_name == "turbidostat"
assert isinstance(algo.automation_job, Turbidostat)
pubsub.publish(f"pioreactor/{unit}/{experiment}/dosing_control/$state/set", "disconnected")
time.sleep(10)
assert algo.state == algo.DISCONNECTED
def test_disconnect_cleanly_during_pumping_execution() -> None:
experiment = "test_disconnect_cleanly_during_pumping_execution"
algo = DosingController(
"chemostat",
volume=5.0,
duration=10,
unit=unit,
experiment=experiment,
)
assert algo.automation.automation_name == "chemostat"
time.sleep(4)
pubsub.publish(f"pioreactor/{unit}/{experiment}/dosing_control/$state/set", "disconnected")
time.sleep(10)
assert algo.state == algo.DISCONNECTED
assert algo.automation_job.state == algo.DISCONNECTED
def test_custom_class_will_register_and_run() -> None:
experiment = "test_custom_class_will_register_and_run"
class NaiveTurbidostat(DosingAutomationJob):
automation_name = "naive_turbidostat"
published_settings = {
"target_od": {"datatype": "float", "settable": True, "unit": "AU"},
"duration": {"datatype": "float", "settable": True, "unit": "min"},
}
def __init__(self, target_od: float, **kwargs: Any) -> None:
super(NaiveTurbidostat, self).__init__(**kwargs)
self.target_od = target_od
def execute(self) -> None:
if self.latest_od > self.target_od:
self.execute_io_action(media_ml=1.0, waste_ml=1.0)
with DosingController(
"naive_turbidostat",
target_od=2.0,
duration=10,
unit=get_unit_name(),
experiment=experiment,
):
pass
def test_what_happens_when_no_od_data_is_coming_in() -> None:
experiment = "test_what_happens_when_no_od_data_is_coming_in"
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/growth_rate",
None,
retain=True,
)
pubsub.publish(
f"pioreactor/{unit}/{experiment}/growth_rate_calculating/od_filtered",
None,
retain=True,
)
algo = Turbidostat(
target_od=0.1, duration=40 / 60, volume=0.25, unit=unit, experiment=experiment
)
pause()
event = algo.run()
assert isinstance(event, events.ErrorOccurred)
algo.clean_up()
def test_changing_duty_cycle_over_mqtt() -> None:
experiment = "test_changing_duty_cycle_over_mqtt"
with ContinuousCycle(unit=unit, experiment=experiment) as algo:
assert algo.duty_cycle == 100
pubsub.publish(f"pioreactor/{unit}/{experiment}/dosing_automation/duty_cycle/set", 50)
pause()
assert algo.duty_cycle == 50
def test_AltMediaCalculator() -> None:
from pioreactor.structs import DosingEvent
ac = AltMediaCalculator()
data = DosingEvent(volume_change=1.0, event="add_media", timestamp="0", source_of_event="test")
assert 0.0 == ac.update(data, 0.0)
data = DosingEvent(
volume_change=1.0, event="add_alt_media", timestamp="1", source_of_event="test"
)
assert 1 / 14.0 == 0.07142857142857142 == ac.update(data, 0.0)
data = DosingEvent(
volume_change=1.0, event="add_alt_media", timestamp="2", source_of_event="test"
)
assert 0.13775510204081634 == ac.update(data, 1 / 14.0) < 2 / 14.0
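The expected values in `test_AltMediaCalculator` follow from a simple dilution model. Assuming a vial volume V of 14 mL: each dosing event of v mL dilutes the current alt-media fraction by (1 - v/V), and an `add_alt_media` event additionally contributes v/V of pure alt media. A sketch of the presumed model (not the library's actual code):

```python
VIAL_ML = 14.0  # assumed vial volume

def update_fraction(event: str, volume_ml: float, fraction: float) -> float:
    # every dose of volume_ml displaces an equal volume of mixed culture
    new_fraction = fraction * (1.0 - volume_ml / VIAL_ML)
    if event == "add_alt_media":
        # the added volume is pure alt media
        new_fraction += volume_ml / VIAL_ML
    return new_fraction

assert update_fraction("add_media", 1.0, 0.0) == 0.0
assert abs(update_fraction("add_alt_media", 1.0, 0.0) - 1 / 14.0) < 1e-12
assert abs(update_fraction("add_alt_media", 1.0, 1 / 14.0) - 0.13775510204081634) < 1e-12
```

The third assertion matches the test's 0.13775510204081634, which is exactly 27/196 = 13/196 + 14/196.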
def test_latest_event_goes_to_mqtt() -> None:
experiment = "test_latest_event_goes_to_mqtt"
class FakeAutomation(DosingAutomationJob):
"""
Do nothing, ever. Just pass.
"""
automation_name = "fake_automation"
published_settings = {"duration": {"datatype": "float", "settable": True, "unit": "min"}}
def __init__(self, **kwargs) -> None:
super(FakeAutomation, self).__init__(**kwargs)
def execute(self):
return events.NoEvent(message="demo", data={"d": 1.0, "s": "test"})
with DosingController(
"fake_automation",
duration=0.1,
unit=get_unit_name(),
experiment=experiment,
) as dc:
assert "latest_event" in dc.automation_job.published_settings
latest_event_from_mqtt = json.loads(
pubsub.subscribe(
f"pioreactor/{unit}/{experiment}/dosing_automation/latest_event"
).payload
)
assert latest_event_from_mqtt["event_name"] == "NoEvent"
assert latest_event_from_mqtt["message"] == "demo"
assert latest_event_from_mqtt["data"]["d"] == 1.0
assert latest_event_from_mqtt["data"]["s"] == "test"
# --- dictknife/tests/test_operators.py (repo: podhmo/dictknife, license: MIT) ---
import unittest
from collections import namedtuple
class OperatorsTests(unittest.TestCase):
def _callFUT(self, op, value):
from dictknife.operators import apply
return apply(op, value)
def test_it(self):
C = namedtuple("C", "value, expected")
candidates = [
C(value="x", expected=False),
C(value="xx", expected=True),
C(value="xxx", expected=False),
]
op = "xx"
for c in candidates:
with self.subTest(op=op, value=c.value):
actual = self._callFUT(op, c.value)
self.assertEqual(actual, c.expected)
def test_and(self):
from ..operators import And
C = namedtuple("C", "value, expected")
candidates = [
C(value="x", expected=False),
C(value="xx", expected=False),
C(value="xxx", expected=False),
]
op = And(["x", "xx", "xxx"])
for c in candidates:
with self.subTest(op=op, value=c.value):
actual = self._callFUT(op, c.value)
self.assertEqual(actual, c.expected)
def test_or(self):
from ..operators import Or
C = namedtuple("C", "value, expected")
candidates = [
C(value="x", expected=True),
C(value="xx", expected=True),
C(value="xxx", expected=True),
]
op = Or(["x", "xx", "xxx"])
for c in candidates:
with self.subTest(op=op, value=c.value):
actual = self._callFUT(op, c.value)
self.assertEqual(actual, c.expected)
def test_and2(self):
from ..operators import And, Not
C = namedtuple("C", "value, expected")
candidates = [
C(value="x", expected=False),
C(value="xx", expected=True),
C(value="xxx", expected=False),
]
op = And([Not("x"), "xx", Not("xxx")])
for c in candidates:
with self.subTest(op=op, value=c.value):
actual = self._callFUT(op, c.value)
self.assertEqual(actual, c.expected)
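The semantics these tests imply can be sketched with a toy reimplementation (hypothetical; dictknife's real operators are richer): a plain value matches by equality, `Not` inverts its sub-operator, `And` requires every sub-operator to match, and `Or` requires at least one.

```python
class Not:
    def __init__(self, op):
        self.op = op

class And:
    def __init__(self, ops):
        self.ops = ops

class Or:
    def __init__(self, ops):
        self.ops = ops

def apply_op(op, value):
    if isinstance(op, Not):
        return not apply_op(op.op, value)
    if isinstance(op, And):
        return all(apply_op(sub, value) for sub in op.ops)
    if isinstance(op, Or):
        return any(apply_op(sub, value) for sub in op.ops)
    return op == value  # a bare value matches by equality

assert apply_op("xx", "xx") and not apply_op("xx", "x")
assert not apply_op(And(["x", "xx", "xxx"]), "xx")  # one value can't equal all three
assert apply_op(Or(["x", "xx", "xxx"]), "xx")
assert apply_op(And([Not("x"), "xx", Not("xxx")]), "xx")
```

This explains why `test_and` expects every candidate to be False, while `test_and2` succeeds by wrapping the conflicting literals in `Not`.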
# --- lib/exceptions.py (repo: deep-learning-20/d2-net, license: BSD-3-Clause-Clear) ---
class EmptyTensorError(Exception):
pass
class NoGradientError(Exception):
pass
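These sentinel exceptions carry no payload; callers catch them to skip a bad sample rather than crash. A hedged usage sketch (illustrative, not d2-net's actual code; the class is redefined so the snippet is self-contained):

```python
class EmptyTensorError(Exception):
    pass

def count_keypoints(n_keypoints: int) -> int:
    # raise the sentinel instead of letting an empty tensor propagate downstream
    if n_keypoints == 0:
        raise EmptyTensorError
    return n_keypoints

caught = False
try:
    count_keypoints(0)
except EmptyTensorError:
    caught = True  # caller skips this sample and moves on
assert caught
assert count_keypoints(5) == 5
```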
# --- numba/tests/error_usecases.py (repo: mawanda-jun/numba, licenses: BSD-2-Clause, Apache-2.0) ---
import numba as nb
@nb.jit(nopython=True, parallel=True)
def foo():
pass
# --- fairseq/models/delta/__init__.py (repo: xiaodashuaiya/fairseq-with-delta, license: MIT) ---
from fairseq.models.delta import two_delta
from fairseq.models.delta import square_delta
from fairseq.models.delta import square_delta_fillpre
from fairseq.models.delta import square_delta_dif5
__all__ = ['two_delta',
'square_delta',
'square_delta_dif5',
'square_delta_fillpre',]
# --- test.py (repo: ZXin0305/hri, license: Apache-2.0) ---
import numpy as np
from IPython import embed
from time import time
import torch
import math
import random
# xx = np.zeros(shape=(2,15,4), dtype=np.float)
# xx[0, 2, 2] = 1
# xx[1, 2, 2] = 0.5
# yy = xx[:,2,2].argsort()
# xx = xx[yy]
# embed()
# xx = np.ones(shape=(75,45))
# xx = xx.tolist()
# # xx.pop(0,20)
# del xx[0:20]
# embed()
# xx = {'0':0,'1':0,'2':0,'3':0,'4':0,'5':0,'6':0,'7':0,'8':0,'9':0,'10':0,'11':0,'12':0,'13':0,'14':0,'15':0,'16':0,
# '17':0,'18':0,'19':0,'20':0,'21':0,'22':0,'23':0,'24':0,'25':0,'26':0,'27':0,'28':0,'29':0,'30':0,'31':0,}
# st = time()
# if "0" in xx.keys():
# et = time()
# print(f"total {(et - st)}")
# def change_pose(pred_3d_bodys):
# """[summary]
# Args:
# pred_3d_bodys ([type]): [description]
# not original
# Returns:
# [type]: [description]
# """
# pose_3d = []
# for i in range(0,1): # 默认都是1个人
# for j in range(15):
# pose_3d.append(pred_3d_bodys[i][j][0]) # x
# pose_3d.append(pred_3d_bodys[i][j][1]) # y
# pose_3d.append(pred_3d_bodys[i][j][2]) # z
# return pose_3d
# xx = np.eye(3)
# yy = np.random.rand(1, 15,3)
# yy =yy.transpose(0,2,1)
# zz = xx @ yy
# zz[0,1] += 1
# zz[0,2] += 1
# embed()
# a = [1,2,3]
# b = [1,2,3]
# c = max(b)
# print(c)
# a = [[1,2,3],[1,2,3]]
# a = np.array(a)
# b = [[1,5,3],[0,0,0]]
# b = np.array(b)
# c = np.array([1,2,3])
# # print(sum(c))
# print(c)
# print(c.argmax(0))
# a = torch.tensor([1,2,3])
# a = 0
# xx = (1 / math.sqrt(2 * math.pi)) * math.exp((-1 / 2) * 0.13)
# xx = math.exp((-1 / 2) * 0.13)
# print(xx)
# xx = random.randrange(30,54)
# print(xx)
# xx = np.array([[1,2,3],[1,2,3]])
# yy = np.delete(xx[:,:],1)
# embed()
# xx = np.array([[ -91.24533081, -9.77925491, 267.06481934, 1. ],
# [ -82.04265594, -31.73023224, 271.43804932, 1. ],
# [ -89.02472687, 40.83181763, 284.30203247, 1. ],
# [-104.65914917, -12.89662933, 276.2901001 , 1. ],
# [-109.66155243, 9.82787323, 289.22158813, 1. ],
# [ -85.39533997, 6.52565002, 293.66082764, 1. ],
# [ -97.42415619, 39.70267487, 290.0920105 , 1. ],
# [-100.76938629, 71.71859741, 304.46191406, 1. ],
# [-110.30347443, 106.38193512, 312.63577271, 1. ],
# [ -77.77825165, -7.06853485, 257.95681763, 1. ],
# [ -70.11280823, 17.82071495, 265.47302246, 1. ],
# [ -69.68502808, 12.14739037, 288.32354736, 1. ],
# [ -80.57032013, 41.87945175, 278.51196289, 1. ],
# [ -77.47626495, 75.3965683 , 291.89306641, 1. ],
# [ -78.98562622, 109.38594055, 302.17233276, 1. ]])
# yy = np.array([[ 195.16946411, 240.30853271, 270.02520752, 2. ,
# -97.80656433, -8.6984005 , 270.02520752, 1427.33996582,
# 1423.13000488, 949.61798096, 548.13201904],
# [ 222.62481689, 187.98979187, 273.14260864, 2. ,
# -85.97328186, -32.63896561, 273.14260864, 1427.33996582,
# 1423.13000488, 949.61798096, 548.13201904],
# [ 213.34616089, 348.8364563 , 293.92376709, 2. ,
# -97.53807831, 44.09518433, 293.92376709, 1427.33996582,
# 1423.13000488, 949.61798096, 548.13201904],
# [ 173.70863342, 233.23698425, 280.52893066, 2. ,
# -112.50686646, -12.4541378 , 280.52893066, 1427.33996582,
# 1423.13000488, 949.61798096, 548.13201904],
# [ 176.00830078, 279.80993652, 292.9100647 , 2. ,
# -116.22911072, 10.05602646, 292.9100647 , 1427.33996582,
# 1423.13000488, 949.61798096, 548.13201904],
# [ 216.99163818, 286.08230591, 300.94900513, 2. ,
# -97.40164948, 13.34723759, 300.94900513, 1427.33996582,
# 1423.13000488, 949.61798096, 548.13201904],
# [ 197.83638 , 343.52972412, 299.24221802, 2. ,
# -107.50037384, 42.39523315, 299.24221802, 1427.33996582,
# 1423.13000488, 949.61798096, 548.13201904],
# [ 207.59408569, 397.00366211, 315.77627563, 2. ,
# -108.87758636, 73.62127686, 315.77627563, 1427.33996582,
# 1423.13000488, 949.61798096, 548.13201904],
# [ 202.59803772, 455.28015137, 322.55981445, 2. ,
# -115.69550323, 108.71553802, 322.55981445, 1427.33996582,
# 1423.13000488, 949.61798096, 548.13201904],
# [ 215.68159485, 247.08103943, 260.17868042, 2. ,
# -84.7667923 , -5.39123869, 260.17868042, 1427.33996582,
# 1423.13000488, 949.61798096, 548.13201904],
# [ 234.49308777, 303.28533936, 262.46777344, 2. ,
# -77.00579834, 19.09913254, 262.46777344, 1427.33996582,
# 1423.13000488, 949.61798096, 548.13201904],
# [ 250.81388855, 287.89248657, 284.92578125, 2. ,
# -75.51580811, 13.37642765, 284.92578125, 1427.33996582,
# 1423.13000488, 949.61798096, 548.13201904],
# [ 229.62341309, 354.34918213, 288.60528564, 2. ,
# -87.57569885, 45.7951622 , 288.60528564, 1427.33996582,
# 1423.13000488, 949.61798096, 548.13201904],
# [ 252.86502075, 412.93890381, 304.94400024, 2. ,
# -81.10929108, 78.66136169, 304.94400024, 1427.33996582,
# 1423.13000488, 949.61798096, 548.13201904],
# [ 265.61761475, 467.32778931, 317.81060791, 2. ,
# -78.63576508, 112.31228638, 317.81060791, 1427.33996582,
# 1423.13000488, 949.61798096, 548.13201904]])
# xx = xx[:,:3]
# yy = yy[:,4:7]
# error = np.linalg.norm(np.abs(xx - yy), axis=1)
# embed()
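The commented-out block above computes a per-joint error between two 15x3 pose arrays: slice out the XYZ columns and take the row-wise L2 norm. A small self-contained sketch of the same computation (note that `np.abs` before an L2 norm is redundant, since the norm squares each component):

```python
import numpy as np

pred = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 2.0]])
gt = np.array([[3.0, 4.0, 0.0], [1.0, 2.0, 2.0]])
error = np.linalg.norm(pred - gt, axis=1)  # one L2 distance per joint (row)
assert np.allclose(error, [5.0, 0.0])
```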
xx = torch.tensor([[1,2,3],
[2,1,1]])
yy = torch.tensor([[0,0,0],
[0,0,0]])
embed()
1ce4af6a4757684e33839827201ab8b5ca5e86bd | 9219 | py | Python | filter_plugins/oc_output/interfaces/interface/subinterfaces/subinterface/vlan/match/single_tagged_range/state/__init__.py | lnde/ansible-ncyang | 214d001564a4c2a27d25a20f4f095b5a0b69b378 | ["MIT"]
# -*- coding: utf-8 -*-
from operator import attrgetter
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType
from pyangbind.lib.yangtypes import RestrictedClassType
from pyangbind.lib.yangtypes import TypedListType
from pyangbind.lib.yangtypes import YANGBool
from pyangbind.lib.yangtypes import YANGListType
from pyangbind.lib.yangtypes import YANGDynClass
from pyangbind.lib.yangtypes import ReferenceType
from pyangbind.lib.base import PybindBase
from collections import OrderedDict
from decimal import Decimal
from bitarray import bitarray
import six

# PY3 support of some PY2 keywords (needs improved)
if six.PY3:
  import builtins as __builtin__
  long = int
elif six.PY2:
  import __builtin__


class state(PybindBase):
  """
  This class was auto-generated by the PythonClass plugin for PYANG
  from YANG module openconfig-interfaces - based on the path /interfaces/interface/subinterfaces/subinterface/vlan/match/single-tagged-range/state. Each member element of
  the container is represented as a class variable - with a specific
  YANG type.

  YANG Description: State for matching single-tagged packets with a range of VLAN
  identifiers.
  """
  __slots__ = ('_path_helper', '_extmethods', '__low_vlan_id', '__high_vlan_id',)

  _yang_name = 'state'
  _yang_namespace = 'http://openconfig.net/yang/interfaces'

  _pybind_generated_by = 'container'

  def __init__(self, *args, **kwargs):
    self._path_helper = False
    self._extmethods = False
    self.__low_vlan_id = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']}, int_size=16), restriction_dict={'range': ['1..4094']}), is_leaf=True, yang_name="low-vlan-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/vlan', defining_module='openconfig-vlan', yang_type='oc-vlan-types:vlan-id', is_config=False)
    self.__high_vlan_id = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']}, int_size=16), restriction_dict={'range': ['1..4094']}), is_leaf=True, yang_name="high-vlan-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/vlan', defining_module='openconfig-vlan', yang_type='oc-vlan-types:vlan-id', is_config=False)

    load = kwargs.pop("load", None)
    if args:
      if len(args) > 1:
        raise TypeError("cannot create a YANG container with >1 argument")
      all_attr = True
      for e in self._pyangbind_elements:
        if not hasattr(args[0], e):
          all_attr = False
          break
      if not all_attr:
        raise ValueError("Supplied object did not have the correct attributes")
      for e in self._pyangbind_elements:
        nobj = getattr(args[0], e)
        if nobj._changed() is False:
          continue
        setmethod = getattr(self, "_set_%s" % e)
        if load is None:
          setmethod(getattr(args[0], e))
        else:
          setmethod(getattr(args[0], e), load=load)

  def _path(self):
    if hasattr(self, "_parent"):
      return self._parent._path() + [self._yang_name]
    else:
      return ['interfaces', 'interface', 'subinterfaces', 'subinterface', 'vlan', 'match', 'single-tagged-range', 'state']

  def _get_low_vlan_id(self):
    """
    Getter method for low_vlan_id, mapped from YANG variable /interfaces/interface/subinterfaces/subinterface/vlan/match/single_tagged_range/state/low_vlan_id (oc-vlan-types:vlan-id)

    YANG Description: The low-value VLAN identifier in a range for single-tagged
    packets. The range is matched inclusively.
    """
    return self.__low_vlan_id

  def _set_low_vlan_id(self, v, load=False):
    """
    Setter method for low_vlan_id, mapped from YANG variable /interfaces/interface/subinterfaces/subinterface/vlan/match/single_tagged_range/state/low_vlan_id (oc-vlan-types:vlan-id)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_low_vlan_id is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_low_vlan_id() directly.

    YANG Description: The low-value VLAN identifier in a range for single-tagged
    packets. The range is matched inclusively.
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v, base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']}, int_size=16), restriction_dict={'range': ['1..4094']}), is_leaf=True, yang_name="low-vlan-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/vlan', defining_module='openconfig-vlan', yang_type='oc-vlan-types:vlan-id', is_config=False)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """low_vlan_id must be of a type compatible with oc-vlan-types:vlan-id""",
          'defined-type': "oc-vlan-types:vlan-id",
          'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']}, int_size=16), restriction_dict={'range': ['1..4094']}), is_leaf=True, yang_name="low-vlan-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/vlan', defining_module='openconfig-vlan', yang_type='oc-vlan-types:vlan-id', is_config=False)""",
        })

    self.__low_vlan_id = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_low_vlan_id(self):
    self.__low_vlan_id = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']}, int_size=16), restriction_dict={'range': ['1..4094']}), is_leaf=True, yang_name="low-vlan-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/vlan', defining_module='openconfig-vlan', yang_type='oc-vlan-types:vlan-id', is_config=False)

  def _get_high_vlan_id(self):
    """
    Getter method for high_vlan_id, mapped from YANG variable /interfaces/interface/subinterfaces/subinterface/vlan/match/single_tagged_range/state/high_vlan_id (oc-vlan-types:vlan-id)

    YANG Description: The high-value VLAN identifier in a range for single-tagged
    packets. The range is matched inclusively.
    """
    return self.__high_vlan_id

  def _set_high_vlan_id(self, v, load=False):
    """
    Setter method for high_vlan_id, mapped from YANG variable /interfaces/interface/subinterfaces/subinterface/vlan/match/single_tagged_range/state/high_vlan_id (oc-vlan-types:vlan-id)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_high_vlan_id is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_high_vlan_id() directly.

    YANG Description: The high-value VLAN identifier in a range for single-tagged
    packets. The range is matched inclusively.
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v, base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']}, int_size=16), restriction_dict={'range': ['1..4094']}), is_leaf=True, yang_name="high-vlan-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/vlan', defining_module='openconfig-vlan', yang_type='oc-vlan-types:vlan-id', is_config=False)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """high_vlan_id must be of a type compatible with oc-vlan-types:vlan-id""",
          'defined-type': "oc-vlan-types:vlan-id",
          'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']}, int_size=16), restriction_dict={'range': ['1..4094']}), is_leaf=True, yang_name="high-vlan-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/vlan', defining_module='openconfig-vlan', yang_type='oc-vlan-types:vlan-id', is_config=False)""",
        })

    self.__high_vlan_id = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_high_vlan_id(self):
    self.__high_vlan_id = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']}, int_size=16), restriction_dict={'range': ['1..4094']}), is_leaf=True, yang_name="high-vlan-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/vlan', defining_module='openconfig-vlan', yang_type='oc-vlan-types:vlan-id', is_config=False)

  low_vlan_id = __builtin__.property(_get_low_vlan_id)
  high_vlan_id = __builtin__.property(_get_high_vlan_id)

  _pyangbind_elements = OrderedDict([('low_vlan_id', low_vlan_id), ('high_vlan_id', high_vlan_id), ])
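Both leaves above wrap an int in two nested `RestrictedClassType` layers: an outer `0..65535` range (a uint16) narrowed to the `1..4094` range of `oc-vlan-types:vlan-id`. A minimal stdlib-only sketch of that nested validation, without pyangbind (the function name is illustrative, not part of the library):

```python
def validate_vlan_id(value):
    """Mirror the nested RestrictedClassType ranges used for low/high-vlan-id."""
    if not isinstance(value, int) or isinstance(value, bool):
        raise TypeError("vlan-id must be an int")
    if not 0 <= value <= 65535:   # outer restriction: must fit in a uint16
        raise ValueError("value does not fit in a uint16")
    if not 1 <= value <= 4094:    # inner restriction: oc-vlan-types:vlan-id
        raise ValueError("value outside the vlan-id range 1..4094")
    return value
```

The two-layer check is why pyangbind nests the types rather than restricting once: the outer class fixes the storage width, the inner one applies the YANG typedef's range.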
e8001615d3dbc929e90502f93666f538121184a5 | 10588 | py | Python | coord_transforms_test.py | icbicket/CLFields | eca760cde80a1256e4f1b89ca227a54184d87c80 | ["BSD-3-Clause"]
import coord_transforms
import unittest

import numpy as np


class QuadrantSymmetryTest(unittest.TestCase):
    '''
    Check expand_quadrant_symmetry is behaving as expected
    Simple 2x2 array
    3x3 array (odd dimensions)
    3x4 (different length dimensions)
    '''

    def testQuadrant2x2ArrayQ1(self):
        '''
        2x2 array symmetrizes properly
        input quadrant 1
        '''
        array = np.array([[0, 1], [2, 3]])
        arrayfull = np.array([[0, 1, 0], [2, 3, 2], [0, 1, 0]])
        testarray = coord_transforms.expand_quadrant_symmetry(array, 1)
        np.testing.assert_array_almost_equal(arrayfull, testarray)

    def testQuadrant2x2ArrayQ2(self):
        '''
        2x2 array symmetrizes properly
        input quadrant 2
        '''
        array = np.array([[0, 1], [2, 3]])
        arrayfull = np.array([[1, 0, 1], [3, 2, 3], [1, 0, 1]])
        testarray = coord_transforms.expand_quadrant_symmetry(array, 2)
        np.testing.assert_array_almost_equal(arrayfull, testarray)

    def testQuadrant2x2ArrayQ3(self):
        '''
        2x2 array symmetrizes properly
        input quadrant 3
        '''
        array = np.array([[0, 1], [2, 3]])
        arrayfull = np.array([[2, 3, 2], [0, 1, 0], [2, 3, 2]])
        testarray = coord_transforms.expand_quadrant_symmetry(array, 3)
        np.testing.assert_array_almost_equal(arrayfull, testarray)

    def testQuadrant2x2ArrayQ4(self):
        '''
        2x2 array symmetrizes properly
        input quadrant 4
        '''
        array = np.array([[0, 1], [2, 3]])
        arrayfull = np.array([[3, 2, 3], [1, 0, 1], [3, 2, 3]])
        testarray = coord_transforms.expand_quadrant_symmetry(array, 4)
        np.testing.assert_array_almost_equal(arrayfull, testarray)

    def testQuadrant3x3ArrayQ1(self):
        '''
        3x3 array symmetrizes properly
        input quadrant 1
        '''
        array = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
        arrayfull = np.array([
            [0, 1, 2, 1, 0],
            [3, 4, 5, 4, 3],
            [6, 7, 8, 7, 6],
            [3, 4, 5, 4, 3],
            [0, 1, 2, 1, 0]])
        testarray = coord_transforms.expand_quadrant_symmetry(array, 1)
        np.testing.assert_array_almost_equal(arrayfull, testarray)

    def testQuadrant3x3ArrayQ2(self):
        '''
        3x3 array symmetrizes properly
        input quadrant 2
        '''
        array = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
        arrayfull = np.array([
            [2, 1, 0, 1, 2],
            [5, 4, 3, 4, 5],
            [8, 7, 6, 7, 8],
            [5, 4, 3, 4, 5],
            [2, 1, 0, 1, 2]])
        testarray = coord_transforms.expand_quadrant_symmetry(array, 2)
        np.testing.assert_array_almost_equal(arrayfull, testarray)

    def testQuadrant3x3ArrayQ3(self):
        '''
        3x3 array symmetrizes properly
        input quadrant 3
        '''
        array = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
        arrayfull = np.array([
            [6, 7, 8, 7, 6],
            [3, 4, 5, 4, 3],
            [0, 1, 2, 1, 0],
            [3, 4, 5, 4, 3],
            [6, 7, 8, 7, 6]])
        testarray = coord_transforms.expand_quadrant_symmetry(array, 3)
        np.testing.assert_array_almost_equal(arrayfull, testarray)

    def testQuadrant3x3ArrayQ4(self):
        '''
        3x3 array symmetrizes properly
        input quadrant 4
        '''
        array = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
        arrayfull = np.array([
            [8, 7, 6, 7, 8],
            [5, 4, 3, 4, 5],
            [2, 1, 0, 1, 2],
            [5, 4, 3, 4, 5],
            [8, 7, 6, 7, 8]])
        testarray = coord_transforms.expand_quadrant_symmetry(array, 4)
        np.testing.assert_array_almost_equal(arrayfull, testarray)
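The implementation under test lives in `coord_transforms` and is not shown here. Consistent with the expectations in these tests (the quadrant argument picks which corner the input represents, and the expanded grid shares the centre row and column with the input), a plain-list sketch could look like this:

```python
def expand_quadrant_symmetry(quad, quadrant):
    """Expand one quadrant of a 4-fold-symmetric grid to the full (2m-1, 2n-1) grid."""
    rows = [list(r) for r in quad]
    # Normalise quadrants 2-4 to the quadrant-1 orientation (centre at bottom-right).
    if quadrant in (2, 4):       # mirror left-right
        rows = [r[::-1] for r in rows]
    if quadrant in (3, 4):       # mirror top-bottom
        rows = rows[::-1]
    # Reflect about the last column, then about the last row,
    # without duplicating the shared centre row/column.
    half = [r + r[-2::-1] for r in rows]
    return half + half[-2::-1]
```

This is only a sketch of the symmetry logic the tests exercise, not the library's actual (NumPy-based) implementation.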
class CartesianSphericalCoordinateTransformTest(unittest.TestCase):
    '''
    (x, y, z) = (0, 1, 0)
    '''

    def test010(self):
        '''
        single element xyz vector
        '''
        xyz = np.array([[0, 1, 0]])
        r, theta, phi = coord_transforms.cartesian_to_spherical_coords(xyz)
        rthph = np.array([[1], [np.pi/2], [np.pi/2]])
        np.testing.assert_array_almost_equal(np.array([r, theta, phi]), rthph)

    def testmulti(self):
        '''
        multi-element xyz vectors
        '''
        xyz = np.array([
            [1, 1, 0],
            [1, -1, 0],
            [-1, 1, 0],
            [0, 1, 1],
            [0, 1, -1],
            [0, -1, 1],
            [0, -1, -1]
            ])
        r, theta, phi = coord_transforms.cartesian_to_spherical_coords(xyz)
        rthph = np.array([
            [np.sqrt(2), np.pi/2, np.pi/4],
            [np.sqrt(2), np.pi/2, 7*np.pi/4],
            [np.sqrt(2), np.pi/2, 3*np.pi/4],
            [np.sqrt(2), np.pi/4, np.pi/2],
            [np.sqrt(2), 3*np.pi/4, np.pi/2],
            [np.sqrt(2), np.pi/4, 3*np.pi/2],
            [np.sqrt(2), 3*np.pi/4, 3*np.pi/2]
            ])
        np.testing.assert_array_almost_equal(np.transpose(np.array([r, theta, phi])), rthph)
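The expected values above pin down the convention `cartesian_to_spherical_coords` must follow: `theta` is the polar angle measured from +z, and `phi` is the azimuth wrapped into [0, 2π) (note (1, -1, 0) maps to 7π/4, not -π/4). A single-point, stdlib-only sketch of that convention (the real function is vectorised over NumPy arrays):

```python
import math

def cartesian_to_spherical(x, y, z):
    """One point: radius r, polar angle theta from +z, azimuth phi in [0, 2*pi)."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r)                 # polar angle, 0..pi
    phi = math.atan2(y, x) % (2 * math.pi)   # azimuth, wrapped to 0..2*pi
    return r, theta, phi
```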
class RotateVectorTest(unittest.TestCase):

    def test010rot001by90(self):
        '''
        take (0,1,0) and rotate around (0,0,1) by pi/2
        '''
        xyz = np.array([0, 1, 0])
        angle = np.pi/2
        rotation_vector = np.array([0, 0, 1])
        rotated_vector = coord_transforms.rotate_vector(xyz, angle, rotation_vector)
        np.testing.assert_array_almost_equal(np.array([-1, 0, 0]), rotated_vector)

    def test001rot001by90(self):
        '''
        take (0,0,1) and rotate around (0,0,1) by pi/2
        '''
        xyz = np.array([0, 0, 1])
        angle = np.pi/2
        rotation_vector = np.array([0, 0, 1])
        rotated_vector = coord_transforms.rotate_vector(xyz, angle, rotation_vector)
        np.testing.assert_array_almost_equal(np.array([0, 0, 1]), rotated_vector)

    def test000rotError(self):
        '''
        take (0,0,0) and throw an error
        '''
        xyz = np.array([0, 0, 0])
        angle = np.pi/2
        rotation_vector = np.array([0, 0, 1])
        self.assertRaises(ValueError, coord_transforms.rotate_vector, xyz, angle, rotation_vector)
class RotateNdVectorTest(unittest.TestCase):

    def testAxisVectorsRotateBy90(self):
        '''
        take (0,1,0), (1,0,0) and (0,0,1) and rotate around (0,0,1) by pi/2
        '''
        xyz = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
        angle = np.pi/2
        rotation_vector = np.array([0, 0, 1])
        rotated_vector = coord_transforms.rotate_vector_Nd(xyz, angle, rotation_vector)
        np.testing.assert_array_almost_equal(np.array([[-1, 0, 0], [0, 1, 0], [0, 0, 1]]), rotated_vector)

    def testFloatsVectorsRotateBy90(self):
        '''
        take (0.707, 0.707, 0), (1.1, 1.1, 1.1) and rotate around (0,0,1) by pi/2
        '''
        xyz = np.array([[1/np.sqrt(2), 1/np.sqrt(2), 0], [1.1, 1.1, 1.1]])
        angle = np.pi/2
        rotation_vector = np.array([0, 0, 1])
        rotated_vector = coord_transforms.rotate_vector_Nd(xyz, angle, rotation_vector)
        np.testing.assert_array_almost_equal(np.array([[-1/np.sqrt(2), 1/np.sqrt(2), 0], [-1.1, 1.1, 1.1]]), rotated_vector)

    def testNegativeAngle(self):
        '''
        rotating a vector by -pi/2 vs rotating by pi/2
        '''
        xyz = np.array([[1/np.sqrt(2), 1/np.sqrt(2), 0], [1.1, 1.1, 1.1]])
        angle = np.pi/2
        rotation_vector = np.array([0, 0, 1])
        rotated_vector = coord_transforms.rotate_vector_Nd(xyz, angle, rotation_vector)
        rotated_vector_negative = coord_transforms.rotate_vector_Nd(xyz, -angle, rotation_vector)
        np.testing.assert_array_almost_equal(rotated_vector_negative, np.array([[-1, -1, 1]])*rotated_vector)

    def testAnglesGreater2Pi(self):
        '''
        rotating a vector by an angle greater than 2pi
        '''
        xyz = np.array([[1/np.sqrt(2), 1/np.sqrt(2), 0], [1.1, 1.1, 1.1]])
        angle = np.pi/2 + 2*np.pi
        rotation_vector = np.array([0, 0, 1])
        rotated_vector = coord_transforms.rotate_vector_Nd(xyz, angle, rotation_vector)
        np.testing.assert_array_almost_equal(np.array([[-1/np.sqrt(2), 1/np.sqrt(2), 0], [-1.1, 1.1, 1.1]]), rotated_vector)

    def testAnglesMultiplesOfPi(self):
        '''
        rotating a vector by an angle that is a multiple of pi
        '''
        xyz = np.array([[1/np.sqrt(2), 1/np.sqrt(2), 0], [1.1, 1.1, 1.1]])
        angle = np.pi
        rotation_vector = np.array([0, 0, 1])
        rotated_vector = coord_transforms.rotate_vector_Nd(xyz, angle, rotation_vector)
        np.testing.assert_array_almost_equal(np.array([-1, -1, 1])*xyz, rotated_vector)

    def testRotationAxisHighMagnitude(self):
        '''
        the input rotation axis has a magnitude that is not 1
        '''
        xyz = np.array([[1/np.sqrt(2), 1/np.sqrt(2), 0], [1.1, 1.1, 1.1]])
        angle = np.pi
        rotation_vector = np.array([0, 0, 3])
        rotated_vector = coord_transforms.rotate_vector_Nd(xyz, angle, rotation_vector)
        np.testing.assert_array_almost_equal(np.array([-1, -1, 1])*xyz, rotated_vector)

    def testTiltedRotationAxis(self):
        '''
        the rotation axis is tilted off one of the main axes
        '''
        xyz = np.array([[0, 0, 1]])
        angle = np.pi/2
        rotation_vector = np.array([1, 1, 0])
        rotated_vector = coord_transforms.rotate_vector_Nd(xyz, angle, rotation_vector)
        np.testing.assert_array_almost_equal(np.array([[1/np.sqrt(2), -1/np.sqrt(2), 0]]), rotated_vector)

    def testNegativeRotationAxis(self):
        '''
        rotating around the rotation axis vs the negative rotation axis should rotate in opposite directions
        '''
        xyz = np.array([[0, 0, 1]])
        angle = np.pi/2
        rotation_vector = np.array([1, 1, 0])
        rotated_vector = coord_transforms.rotate_vector_Nd(xyz, angle, rotation_vector)
        rotated_vector_negative = coord_transforms.rotate_vector_Nd(xyz, angle, -rotation_vector)
        np.testing.assert_array_almost_equal(rotated_vector_negative, -rotated_vector)
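The behaviour these rotation tests pin down (axis normalised before use, angles taken mod 2π automatically, a zero input vector rejected) matches Rodrigues' rotation formula: v' = v cosθ + (k×v) sinθ + k (k·v)(1 − cosθ) for a unit axis k. A single-vector, stdlib-only sketch consistent with the tests (the library versions are NumPy-based and, for `_Nd`, vectorised over rows):

```python
import math

def rotate_vector(v, angle, axis):
    """Rotate 3-vector v about axis by angle (radians) via Rodrigues' formula."""
    if all(c == 0 for c in v):
        raise ValueError("cannot rotate the zero vector")   # cf. test000rotError
    norm = math.sqrt(sum(a * a for a in axis))
    if norm == 0:
        raise ValueError("rotation axis must be non-zero")
    k = [a / norm for a in axis]                            # unit rotation axis
    c, s = math.cos(angle), math.sin(angle)
    kxv = [k[1] * v[2] - k[2] * v[1],                       # k x v
           k[2] * v[0] - k[0] * v[2],
           k[0] * v[1] - k[1] * v[0]]
    kdv = sum(ki * vi for ki, vi in zip(k, v))              # k . v
    return [v[i] * c + kxv[i] * s + k[i] * kdv * (1 - c) for i in range(3)]
```

Because the axis is normalised and sin/cos are periodic, the high-magnitude-axis and angle-greater-than-2π cases fall out for free, which is exactly what the tests above check.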
if __name__ == '__main__':
    unittest.main()
e806e8732fe7e5826bb76c40c268c536c00c12f5 | 221 | py | Python | News/admin.py | clowdcap/mysite | 2fd6a2f69cfc58ef012138340ae86d8896bff647 | ["MIT"]
from django.contrib import admin
from .models import News, SportNews, DataRegistro, Newsletter

admin.site.register(News)
admin.site.register(SportNews)
admin.site.register(DataRegistro)
admin.site.register(Newsletter)