hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3e23c45228504052e3278477b6579ec6d665b2be | 85 | py | Python | users/views/__init__.py | andywar65/rp_repo | 726c1426d738b962cabeabd8995aa35767df0c41 | [
"BSD-2-Clause"
] | null | null | null | users/views/__init__.py | andywar65/rp_repo | 726c1426d738b962cabeabd8995aa35767df0c41 | [
"BSD-2-Clause"
] | null | null | null | users/views/__init__.py | andywar65/rp_repo | 726c1426d738b962cabeabd8995aa35767df0c41 | [
"BSD-2-Clause"
] | null | null | null | from .account_views import *
from .contact_views import *
from .login_views import *
| 21.25 | 28 | 0.788235 | 12 | 85 | 5.333333 | 0.5 | 0.515625 | 0.46875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141176 | 85 | 3 | 29 | 28.333333 | 0.876712 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
3e3a66776f1d67c83ef44aa183046694139fa464 | 12,874 | py | Python | MMO/problems/cdn.py | laloc2496/cdn_configuration_optimization | 58cf2278456d0ef8796570f12f1d00fd68aec686 | [
"MIT"
] | null | null | null | MMO/problems/cdn.py | laloc2496/cdn_configuration_optimization | 58cf2278456d0ef8796570f12f1d00fd68aec686 | [
"MIT"
] | null | null | null | MMO/problems/cdn.py | laloc2496/cdn_configuration_optimization | 58cf2278456d0ef8796570f12f1d00fd68aec686 | [
"MIT"
] | null | null | null | import autograd.numpy as anp
from .problem import Problem
from pymoo.util.normalization import normalize
import multiprocessing as mp
from simulation import *
# NUM_PROCESSORS = 4
import random, os, re
import pickle
import numpy as np
class CDN(Problem):
## 324, 5188
    def __init__(self, n_var=4, n_obj=2, n_constr=0, xl=10, xu=180, min_cost=0.5, transformToInteger=False):
super().__init__(n_var=n_var, n_obj=n_obj, n_constr=n_constr, xl=xl, xu=xu)
self.min_cost = min_cost
self.count_step = 0
self.transformToInteger = transformToInteger
def compute_y_bounds(self):
performance_lower = 1.0 * self.performance_function([self.xu])
cost_lower = self.cost_function([self.xl])
performance_upper = 1.0 * self.performance_function([self.xl])
cost_upper = self.cost_function([self.xu])
self.performance_bounds = [performance_lower, performance_upper]
self.cost_bounds = [cost_lower, cost_upper]
class CDN_PLACEMENT(CDN):
def __init__(self, n_var=4, n_obj=2, n_constr=0, xl=0, xu=1, min_cost=0.5, transformToInteger=False):
super().__init__(n_var=n_var, n_obj=n_obj, n_constr=n_constr, xl=xl, xu=xu, min_cost=min_cost, transformToInteger=transformToInteger)
    def _calc_pareto_front(self, n_pareto_points=100):
        raise NotImplementedError("Not implemented yet")
def _evaluate(self, x, out, *args, **kwargs):
x_temp = np.round(x.copy())
self.count_step += len(x_temp)
if self.n_process > 1:
performance = 1.0 * self.performance_function_parallel(x_temp)
else:
performance = 1.0 * self.performance_function(x_temp)
cost = self.cost_function(x_temp)
del x, x_temp
normalized_performance = (performance - np.ones(performance.shape) * self.performance_bounds[0]) / (self.performance_bounds[1] - self.performance_bounds[0])
normalized_cost = (cost - self.cost_bounds[0]) / (self.cost_bounds[1] - self.cost_bounds[0])
out["F"] = np.column_stack([normalized_performance, normalized_cost])
        if self.deleteCachePath is not None:
for f in os.listdir(self.deleteCachePath):
if re.search("save_*", f):
os.remove(os.path.join(self.deleteCachePath, f))
for f in os.listdir(self.deleteCachePath):
if re.search("cacheDict*", f):
os.remove(os.path.join(self.deleteCachePath, f))
def get_parameters(self, topo, fileSize, mode, colorList, runReqNums, warmUpReqNums, separatorRankIncrement, n_process, deleteCachePath=None, interval=None):
self.topo = topo
self.fileSize = fileSize
self.mode = mode
self.colorList = colorList
self.runReqNums = runReqNums
self.warmUpReqNums = warmUpReqNums
self.separatorRankIncrement = separatorRankIncrement
self.n_process = n_process
self.deleteCachePath = deleteCachePath
self.interval = interval
savePredefinedContent = os.path.join(self.deleteCachePath, "content.pkl")
if interval is None:
if os.path.isfile(savePredefinedContent):
with open(savePredefinedContent, "rb") as f:
generateData = pickle.load(f)
else:
generateData = {}
for client in self.topo.clientIds:
cacheId = client.replace("client", "Cache")
generateData[cacheId] = {"noInterval": self.topo.contentGenerator.randomGen(self.runReqNums)}
with open(savePredefinedContent, "wb") as f:
pickle.dump(generateData, f)
self.generateData = generateData
with open(os.path.join(self.deleteCachePath, "save_-1.pkl"), "wb") as f:
pickle.dump(self.topo, f)
self.compute_y_bounds()
def performance_function_parallel(self, avaiableVector):
dataList = []
for i in range(len(avaiableVector)):
randomIdx = random.randint(10000, 99999)
save_data = [randomIdx, self.fileSize, self.mode, self.topo, self.colorList, self.runReqNums, self.warmUpReqNums, self.separatorRankIncrement, avaiableVector[i]]
with open(os.path.join(self.deleteCachePath, "save_" + str(randomIdx) + ".pkl"), "wb") as f:
pickle.dump(save_data, f)
dataList.append(randomIdx)
        pool = mp.Pool(processes=self.n_process)
        results = pool.map(self.process_compute_performance, dataList)
        pool.terminate()
        del pool, dataList
        return np.array(results)
    def process_compute_performance(self, randomIdx):
with open(os.path.join(self.deleteCachePath, "save_" + str(randomIdx) + ".pkl"), "rb") as f:
data = pickle.load(f)
idx, fileSize, mode, topo, colorList, runReqNums, warmUpReqNums, separatorRankIncrement, avaiableVector = data
topo.reconfigTopology(avaiableVector, idx)
routingTable = {}
        if self.interval is not None:
traffic = runSimulationWithRealDataset(self.interval, fileSize, mode, routingTable, topo, colorList, runReqNums, warmUpReqNums, separatorRankIncrement)
else:
traffic = runSimulationWithPredefinedDistribution(fileSize, mode, routingTable, topo, colorList, runReqNums, warmUpReqNums, separatorRankIncrement, self.generateData, idx)
del data, fileSize, mode, topo, colorList, runReqNums, warmUpReqNums, separatorRankIncrement, avaiableVector
return int(traffic)
def performance_function(self, avaiableVector):
results = []
with open(os.path.join(self.deleteCachePath, "save_-1.pkl"), "rb") as f:
        topo = pickle.load(f)
for i in range(len(avaiableVector)):
topo.reconfigTopology(avaiableVector[i], 0)
routingTable = {}
            if self.interval is not None:
traffic = runSimulationWithRealDataset(self.interval, self.fileSize, self.mode, routingTable, topo, self.colorList, self.runReqNums, self.warmUpReqNums, self.separatorRankIncrement)
else:
traffic = runSimulationWithPredefinedDistribution(self.fileSize, self.mode, routingTable, topo, self.colorList, self.runReqNums, self.warmUpReqNums, self.separatorRankIncrement, self.generateData, 0)
results.append(traffic)
return np.array(results)
def cost_function(self, avaiableVector):
result = []
for i in range(len(avaiableVector)):
result.append(int(sum(avaiableVector[i])))
return np.array(result)
class CDN_RAM(CDN):
    def _calc_pareto_front(self, n_pareto_points=100):
        raise NotImplementedError("Not implemented yet")
def _evaluate(self, x, out, *args, **kwargs):
x_temp = np.round(x.copy())
self.count_step += len(x_temp)
if self.n_process > 1:
performance = 1.0 * self.performance_function_parallel(x_temp)
else:
performance = 1.0 * self.performance_function(x_temp)
cost = self.cost_function(x_temp)
del x, x_temp
normalized_performance = (performance - np.ones(performance.shape) * self.performance_bounds[0]) / (self.performance_bounds[1] - self.performance_bounds[0])
normalized_cost = (cost - self.cost_bounds[0]) / (self.cost_bounds[1] - self.cost_bounds[0])
out["F"] = np.column_stack([normalized_performance, normalized_cost])
# out["G"] = int(self.min_cost * (80*1024-10*1024) + 10 *1024) - cost
        if self.deleteCachePath is not None:
for f in os.listdir(self.deleteCachePath):
if re.search("save_*", f):
os.remove(os.path.join(self.deleteCachePath, f))
for f in os.listdir(self.deleteCachePath):
if re.search("cacheDict*", f):
os.remove(os.path.join(self.deleteCachePath, f)) # "./config/sbd_custom"
def get_parameters(self, topo, fileSize, mode, colorList, runReqNums, warmUpReqNums, separatorRankIncrement, n_process, deleteCachePath=None, interval=None):
self.topo = topo
self.fileSize = fileSize
self.mode = mode
self.colorList = colorList
self.runReqNums = runReqNums
self.warmUpReqNums = warmUpReqNums
self.separatorRankIncrement = separatorRankIncrement
self.n_process = n_process
self.deleteCachePath = deleteCachePath
self.interval = interval
savePredefinedContent = os.path.join(self.deleteCachePath, "content.pkl")
if interval is None:
if os.path.isfile(savePredefinedContent):
with open(savePredefinedContent, "rb") as f:
generateData = pickle.load(f)
else:
generateData = {}
for client in self.topo.clientIds:
cacheId = client.replace("client", "Cache")
generateData[cacheId] = {"noInterval": self.topo.contentGenerator.randomGen(self.runReqNums)}
with open(savePredefinedContent, "wb") as f:
pickle.dump(generateData, f)
self.generateData = generateData
with open(os.path.join(self.deleteCachePath, "save_-1.pkl"), "wb") as f:
pickle.dump(self.topo, f)
self.compute_y_bounds()
def performance_function_parallel(self, cacheSizeFactorList):
dataList = []
for i in range(len(cacheSizeFactorList)):
temp = [0] * (len(cacheSizeFactorList[i]))
for j in range(len(cacheSizeFactorList[i])):
temp[j] = int(cacheSizeFactorList[i][j] * 1024) # 982013
randomIdx = random.randint(10000, 99999)
save_data = [randomIdx, self.fileSize, self.mode, self.topo, self.colorList, self.runReqNums, self.warmUpReqNums, self.separatorRankIncrement, temp]
with open(os.path.join(self.deleteCachePath, "save_" + str(randomIdx) + ".pkl"), "wb") as f:
pickle.dump(save_data, f)
dataList.append(randomIdx)
        pool = mp.Pool(processes=self.n_process)
        results = pool.map(self.process_compute_performance, dataList)
        pool.terminate()
        del pool, dataList
        return np.array(results)
    def process_compute_performance(self, randomIdx):
with open(os.path.join(self.deleteCachePath, "save_" + str(randomIdx) + ".pkl"), "rb") as f:
data = pickle.load(f)
idx, fileSize, mode, topo, colorList, runReqNums, warmUpReqNums, separatorRankIncrement, cacheSizeFactorList = data
topo.reconfigRam(cacheSizeFactorList, idx)
routingTable = {}
        if self.interval is not None:
traffic = runSimulationWithRealDataset(self.interval, fileSize, mode, routingTable, topo, colorList, runReqNums, warmUpReqNums, separatorRankIncrement)
else:
traffic = runSimulationWithPredefinedDistribution(fileSize, mode, routingTable, topo, colorList, runReqNums, warmUpReqNums, separatorRankIncrement, self.generateData, idx)
del data, fileSize, mode, topo, colorList, runReqNums, warmUpReqNums, separatorRankIncrement, cacheSizeFactorList
return int(traffic)
def performance_function(self, cacheSizeFactorList):
with open(os.path.join(self.deleteCachePath, "save_-1.pkl"), "rb") as f:
        topo = pickle.load(f)
results = []
for i in range(len(cacheSizeFactorList)):
temp = [0] * (len(cacheSizeFactorList[i]))
for j in range(len(cacheSizeFactorList[i])):
temp[j] = int(cacheSizeFactorList[i][j] * 1024) # * (800*1024-100*1024) + 100 *1024
topo.reconfigRam(temp, 0)
routingTable = {}
            if self.interval is not None:
traffic = runSimulationWithRealDataset(self.interval, self.fileSize, self.mode, routingTable, topo, self.colorList, self.runReqNums, self.warmUpReqNums, self.separatorRankIncrement)
else:
traffic = runSimulationWithPredefinedDistribution(self.fileSize, self.mode, routingTable, topo, self.colorList, self.runReqNums, self.warmUpReqNums, self.separatorRankIncrement, self.generateData, 0)
results.append(traffic)
return np.array(results)
def cost_function(self, cacheSizeFactorList):
result = []
for i in range(len(cacheSizeFactorList)):
temp = [0] * (len(cacheSizeFactorList[i]))
for j in range(len(cacheSizeFactorList[i])):
temp[j] = int(cacheSizeFactorList[i][j]) # (800*1024-100*1024) + 100*1024
result.append(int(sum(temp)))
return np.array(result)
| 50.486275 | 215 | 0.638263 | 1,397 | 12,874 | 5.764495 | 0.118826 | 0.051906 | 0.017385 | 0.024339 | 0.879672 | 0.879672 | 0.853347 | 0.842916 | 0.83472 | 0.83472 | 0 | 0.017024 | 0.256253 | 12,874 | 254 | 216 | 50.685039 | 0.824021 | 0.01538 | 0 | 0.803738 | 0 | 0 | 0.018631 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.079439 | false | 0 | 0.028037 | 0 | 0.158879 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
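The `_evaluate` methods in both CDN subclasses min-max normalize each objective with the bounds precomputed by `compute_y_bounds`, so performance and cost land on a comparable [0, 1] scale before being stacked into `out["F"]`. A minimal sketch of that normalization (the sample values and bounds below are hypothetical stand-ins, not outputs of the simulator):

```python
import numpy as np

def normalize_objective(values, bounds):
    # Min-max normalization as done in _evaluate: (x - lo) / (hi - lo)
    # maps raw objective values onto [0, 1] given precomputed bounds.
    lo, hi = bounds
    return (np.asarray(values, dtype=float) - lo) / (hi - lo)

# Hypothetical raw values and bounds standing in for the output of
# compute_y_bounds; the real bounds come from simulating at xl and xu.
perf = normalize_objective([120.0, 180.0, 240.0], (120.0, 240.0))
cost = normalize_objective([2.0, 3.0, 5.0], (1.0, 5.0))
F = np.column_stack([perf, cost])  # shape (3, 2), as in out["F"]
assert F.shape == (3, 2)
```

Normalizing both objectives before stacking keeps neither objective from dominating the Pareto search purely by scale.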
e4148907af5866ece673821e1227a4638a1661dd | 18,648 | py | Python | QSubspaceEigensolver.py | Lizt1996/VariationalQuantumGeneralizedEigensolver | 52debb39fe6e20029f1290fae2f51d90c45ab679 | [
"Apache-2.0"
] | null | null | null | QSubspaceEigensolver.py | Lizt1996/VariationalQuantumGeneralizedEigensolver | 52debb39fe6e20029f1290fae2f51d90c45ab679 | [
"Apache-2.0"
] | null | null | null | QSubspaceEigensolver.py | Lizt1996/VariationalQuantumGeneralizedEigensolver | 52debb39fe6e20029f1290fae2f51d90c45ab679 | [
"Apache-2.0"
] | null | null | null | from qiskit import *
from QMeasure import HadamardTest, HadamardTest_Analytical, state_backend
from QAnsatz import Ansatze
from QCircuit import MixedStateGenerationCircuit
from QHamiltonian import Hamiltonian_in_Pauli_String
import random
import numpy as np
class SubspaceEigSolverError(Exception):
pass
class SubspaceEigSolver:
"""
Hamiltonian is in the format of the sum of weighted Pauli string
L(x) = Tr(ha)
a = sum_{i=0}^{m-1} U(x)|i><i|U^{-1}(x)
h is the observable state which may be a mixed state
"""
def __init__(self, Hamiltonian: Hamiltonian_in_Pauli_String, ansatze: Ansatze, weight_list: list):
"""
Class of state subspace eigen solver.
        :param Hamiltonian: H in Hx = λx
:param ansatze: The quantum circuit network with respect to parameters.
:param weight_list: The guidance of subspace searching. c.f. Von Neumann theorem.
"""
self.state_scale = Hamiltonian.qubits
if len(weight_list) > 2 ** self.state_scale:
raise SubspaceEigSolverError('Error in StateSubspaceEigSolver! Incorrect weight list size')
self.ansatze = ansatze
self.weight_list = weight_list
self.Hamiltonian = Hamiltonian
def AnsatzStateGenerationCircuit(self, partial_flag: bool = False, pid: int = 0, pn=None):
return MixedStateGenerationCircuit(self.ansatze.circuit(partial_flag, pid, pn),
list(np.sqrt(abs(np.array(self.weight_list)))))
def LossFunctionAnalytical(self):
return self.Hamiltonian.ExpectationMeasurement(MeasurementMethod=HadamardTest_Analytical,
test_circuit=self.AnsatzStateGenerationCircuit(),
active_qubits=[i for i in range(self.state_scale)])
def PartialDerivativeAnalytical(self, pid):
"""
Partial derivative of parameter pid. (c.f. K. Mitarai, Quantum circuit learning)
:param pid: parameter identifier.
        :return: Partial derivative = 1/2*ppd - 1/2*npd.
"""
ppd = self.Hamiltonian.ExpectationMeasurement(MeasurementMethod=HadamardTest_Analytical,
test_circuit=self.AnsatzStateGenerationCircuit(partial_flag=True,
pid=pid,
pn='+'),
active_qubits=[i for i in range(self.state_scale)])
npd = self.Hamiltonian.ExpectationMeasurement(MeasurementMethod=HadamardTest_Analytical,
test_circuit=self.AnsatzStateGenerationCircuit(partial_flag=True,
pid=pid,
pn='-'),
active_qubits=[i for i in range(self.state_scale)])
return np.real(1 / 2 * ppd - 1 / 2 * npd)
def GetJacobianAnalytical(self, par: list):
if len(par) != self.ansatze.getParameterLength():
raise SubspaceEigSolverError(
'Error in SubspaceEigSolver GetJacobian! Incorrect parameter length')
self.setParameter(par)
jac = [0 for i in range(len(par))]
for i in range(len(par)):
jac[i] = self.PartialDerivativeAnalytical(i)
return np.array(jac)
def LossFunction(self, shots: int = 10000):
return self.Hamiltonian.ExpectationMeasurement(MeasurementMethod=HadamardTest_Analytical,
test_circuit=self.AnsatzStateGenerationCircuit(),
active_qubits=[i for i in range(self.state_scale)],
shots=shots)
def PartialDerivative(self, pid, shots: int = 10000):
"""
Partial derivative of parameter pid. (c.f. K. Mitarai, Quantum circuit learning)
:param pid: Parameter identifier.
:param shots: How many times the DSWAPT measure the density matrix product trace.
        :return: Partial derivative = 1/2*ppd - 1/2*npd.
"""
ppd = self.Hamiltonian.ExpectationMeasurement(MeasurementMethod=HadamardTest_Analytical,
test_circuit=self.AnsatzStateGenerationCircuit(partial_flag=True,
pid=pid,
pn='+'),
active_qubits=[i for i in range(self.state_scale)],
shots=shots)
npd = self.Hamiltonian.ExpectationMeasurement(MeasurementMethod=HadamardTest_Analytical,
test_circuit=self.AnsatzStateGenerationCircuit(partial_flag=True,
pid=pid,
pn='-'),
active_qubits=[i for i in range(self.state_scale)],
shots=shots)
return np.real(1 / 2 * ppd - 1 / 2 * npd)
def GetJacobian(self, par: list, shots: int = 10000):
if len(par) != self.ansatze.getParameterLength():
raise SubspaceEigSolverError(
'Error in SubspaceEigSolver GetJacobian! Incorrect parameter length')
self.setParameter(par)
jac = [0 for i in range(len(par))]
for i in range(len(par)):
jac[i] = self.PartialDerivative(i, shots)
return np.array(jac)
def EigTrace(self, getEigenstate: bool = False, getLossFunction: bool = False):
eigval = []
eigvec = []
lossfun = 0
initvec = np.zeros(2 ** self.state_scale)
for i in range(len(self.weight_list)):
initvec[i] = 1
check_circuit = QuantumCircuit(self.state_scale)
check_circuit.initialize(initvec, [i for i in range(self.state_scale)])
initvec[i] = 0
check_circuit.compose(self.ansatze.circuit(), [i for i in range(self.state_scale)], inplace=True)
eigval.append(self.Hamiltonian.ExpectationMeasurement(MeasurementMethod=HadamardTest_Analytical,
test_circuit=check_circuit,
active_qubits=[i for i in range(self.state_scale)]))
if getEigenstate:
job = execute(check_circuit, state_backend)
result = job.result()
eigvec.append(result.get_statevector(check_circuit, decimals=3))
if getLossFunction:
lossfun = self.LossFunctionAnalytical()
return {'eigval': eigval, 'eigvec': eigvec, 'lossfun': lossfun}
def setParameter(self, new_parameter: list):
self.ansatze.setParameter(new_parameter)
    def getLossFunction(self, parameter: np.ndarray):
        self.setParameter(parameter)
        return self.LossFunction()
    def getLossFunctionAnalytical(self, parameter: np.ndarray):
        self.setParameter(parameter)
        return self.LossFunctionAnalytical()
def getParameter(self):
return self.ansatze.getParameter()
def getEigenData(self, par, vector_required: bool = True, lossfun_required: bool = True):
self.setParameter(par)
return self.EigTrace(vector_required, lossfun_required)
def showStateVector(self, parameter: list):
self.setParameter(parameter)
backend = BasicAer.get_backend('statevector_simulator')
qc = self.ansatze.circuit()
print(qc.draw('text'))
job = execute(qc, backend)
result = job.result()
return result.get_statevector(qc, decimals=3)
class SubspaceEigSolver_ClassicalEfficientSimulator:
"""
Hamiltonian is in the format of the sum of weighted Pauli string
L(x) = Tr(ha)
a = sum_{i=0}^{m-1} U(x)|i><i|U^{-1}(x)
h is the observable state which may be a mixed state
"""
def __init__(self, Hamiltonian: Hamiltonian_in_Pauli_String, ansatze: Ansatze, weight_list: list):
"""
Class of state subspace eigen solver.
        :param Hamiltonian: H in Hx = λx
:param ansatze: The quantum circuit network with respect to parameters.
:param weight_list: The guidance of subspace searching. c.f. Von Neumann theorem.
"""
self.state_scale = Hamiltonian.qubits
if len(weight_list) > 2 ** self.state_scale:
raise SubspaceEigSolverError('Error in StateSubspaceEigSolver! Incorrect weight list size')
self.ansatze = ansatze
self.weight_list = weight_list
self.Hamiltonian = Hamiltonian
def LossFunctionAnalytical(self):
res = 0
for j in range(len(self.weight_list)):
initvec = np.zeros(2 ** self.state_scale)
initvec[j] = 1
check_circuit = QuantumCircuit(self.state_scale)
check_circuit.initialize(initvec, [i for i in range(self.state_scale)])
check_circuit.compose(self.ansatze.circuit(), [i for i in range(self.state_scale)], inplace=True)
job = execute(check_circuit, state_backend)
result = job.result()
state = result.get_statevector(check_circuit, decimals=3)
res += self.weight_list[j] * np.dot(np.dot(state.conj(), self.Hamiltonian.hamiltonian_mat), state)
return np.real(res)
def PartialDerivativeAnalytical(self, pid):
"""
Partial derivative of parameter pid. (c.f. K. Mitarai, Quantum circuit learning)
:param pid: parameter identifier.
:return: Partial derivative = 1/2*ppd-1/2*npd.
"""
ppd = 0
initvec = np.zeros(2 ** self.state_scale)
for j in range(len(self.weight_list)):
initvec[j] = 1
check_circuit = QuantumCircuit(self.state_scale)
check_circuit.initialize(initvec, [i for i in range(self.state_scale)])
check_circuit.compose(self.ansatze.circuit(partial_flag=True,
pid=pid,
pn='+'),
[i for i in range(self.state_scale)], inplace=True)
job = execute(check_circuit, state_backend)
result = job.result()
state = result.get_statevector(check_circuit)
ppd += self.weight_list[j] * np.dot(np.dot(state.conj(), self.Hamiltonian.hamiltonian_mat), state)
initvec[j] = 0
npd = 0
for j in range(len(self.weight_list)):
initvec[j] = 1
check_circuit = QuantumCircuit(self.state_scale)
check_circuit.initialize(initvec, [i for i in range(self.state_scale)])
check_circuit.compose(self.ansatze.circuit(partial_flag=True,
pid=pid,
pn='-'),
[i for i in range(self.state_scale)], inplace=True)
job = execute(check_circuit, state_backend)
result = job.result()
state = result.get_statevector(check_circuit)
npd += self.weight_list[j] * np.dot(np.dot(state.conj(), self.Hamiltonian.hamiltonian_mat), state)
initvec[j] = 0
return np.real(1 / 2 * ppd - 1 / 2 * npd)
def GetJacobianAnalytical(self, par: list):
if len(par) != self.ansatze.getParameterLength():
raise SubspaceEigSolverError(
'Error in SubspaceEigSolver GetJacobian! Incorrect parameter length')
self.setParameter(par)
jac = [0 for i in range(len(par))]
for i in range(len(par)):
jac[i] = self.PartialDerivativeAnalytical(i)
return np.array(jac)
def LossFunction(self, shots: int = 10000):
res = 0
for i in range(len(self.weight_list)):
initvec = np.zeros(2 ** self.state_scale)
initvec[i] = 1
check_circuit = QuantumCircuit(self.state_scale)
check_circuit.initialize(initvec, [i for i in range(self.state_scale)])
check_circuit.compose(self.ansatze.circuit(), [i for i in range(self.state_scale)], inplace=True)
res += self.weight_list[i] * self.Hamiltonian.ExpectationMeasurement(
MeasurementMethod=HadamardTest,
test_circuit=check_circuit,
active_qubits=[i for i in range(self.state_scale)],
shots=shots)
return res
def PartialDerivative(self, pid, shots: int = 10000):
"""
Partial derivative of parameter pid. (c.f. K. Mitarai, Quantum circuit learning)
:param pid: Parameter identifier.
:param shots: How many times the DSWAPT measure the density matrix product trace.
        :return: Partial derivative = 1/2*ppd - 1/2*npd.
"""
ppd = 0
for i in range(len(self.weight_list)):
initvec = np.zeros(2 ** self.state_scale)
initvec[i] = 1
check_circuit = QuantumCircuit(self.state_scale)
check_circuit.initialize(initvec, [i for i in range(self.state_scale)])
check_circuit.compose(self.ansatze.circuit(partial_flag=True,
pid=pid,
pn='+'),
[i for i in range(self.state_scale)], inplace=True)
ppd += self.weight_list[i] * self.Hamiltonian.ExpectationMeasurement(
MeasurementMethod=HadamardTest,
test_circuit=check_circuit,
active_qubits=[i for i in range(self.state_scale)],
shots=shots)
npd = 0
for i in range(len(self.weight_list)):
initvec = np.zeros(2 ** self.state_scale)
initvec[i] = 1
check_circuit = QuantumCircuit(self.state_scale)
check_circuit.initialize(initvec, [i for i in range(self.state_scale)])
check_circuit.compose(self.ansatze.circuit(partial_flag=True,
pid=pid,
pn='-'),
[i for i in range(self.state_scale)], inplace=True)
npd += self.weight_list[i] * self.Hamiltonian.ExpectationMeasurement(
MeasurementMethod=HadamardTest,
test_circuit=check_circuit,
active_qubits=[i for i in range(self.state_scale)],
shots=shots)
return np.real(1 / 2 * ppd - 1 / 2 * npd)
def GetJacobian(self, par: list, shots: int = 10000):
if len(par) != self.ansatze.getParameterLength():
raise SubspaceEigSolverError(
'Error in SubspaceEigSolver GetJacobian! Incorrect parameter length')
self.setParameter(par)
jac = [0 for i in range(len(par))]
for i in range(len(par)):
jac[i] = self.PartialDerivative(i, shots)
return np.array(jac)
def EigTrace(self, getEigenstate: bool = False, getLossFunction: bool = False):
eigval = []
eigvec = []
lossfun = 0
initvec = np.zeros(2 ** self.state_scale)
for i in range(len(self.weight_list)):
initvec[i] = 1
check_circuit = QuantumCircuit(self.state_scale)
check_circuit.initialize(initvec, [i for i in range(self.state_scale)])
initvec[i] = 0
check_circuit.compose(self.ansatze.circuit(), [i for i in range(self.state_scale)], inplace=True)
eigval.append(self.Hamiltonian.ExpectationMeasurement(MeasurementMethod=HadamardTest_Analytical,
test_circuit=check_circuit,
active_qubits=[i for i in range(self.state_scale)]))
if getEigenstate:
job = execute(check_circuit, state_backend)
result = job.result()
eigvec.append(result.get_statevector(check_circuit, decimals=3))
if getLossFunction:
lossfun = self.LossFunctionAnalytical()
return {'eigval': eigval, 'eigvec': eigvec, 'lossfun': lossfun}
def setParameter(self, new_parameter: list):
self.ansatze.setParameter(new_parameter)
    def getLossFunction(self, parameter: np.ndarray):
        self.setParameter(parameter)
        return self.LossFunction()
    def getLossFunctionAnalytical(self, parameter: np.ndarray):
        self.setParameter(parameter)
        return self.LossFunctionAnalytical()
def getParameter(self):
return self.ansatze.getParameter()
def getEigenData(self, par, vector_required: bool = True, lossfun_required: bool = True):
self.setParameter(par)
return self.EigTrace(vector_required, lossfun_required)
def showStateVector(self, parameter: list):
self.setParameter(parameter)
backend = BasicAer.get_backend('statevector_simulator')
qc = self.ansatze.circuit()
print(qc.draw('text'))
job = execute(qc, backend)
result = job.result()
return result.get_statevector(qc, decimals=3)
| 51.371901 | 120 | 0.555609 | 1,877 | 18,648 | 5.403303 | 0.093234 | 0.032439 | 0.063498 | 0.047722 | 0.943798 | 0.940939 | 0.940939 | 0.940051 | 0.940051 | 0.937192 | 0 | 0.008913 | 0.356231 | 18,648 | 362 | 121 | 51.513812 | 0.835902 | 0.086068 | 0 | 0.916968 | 0 | 0 | 0.029218 | 0.005379 | 0 | 0 | 0 | 0 | 0 | 1 | 0.104693 | false | 0.00361 | 0.025271 | 0.018051 | 0.231047 | 0.00722 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
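The `PartialDerivative` methods in the eigensolver above implement the parameter-shift rule (cf. K. Mitarai, Quantum circuit learning): the gradient of the expectation is half the difference between evaluations at the parameter shifted by ±π/2. A minimal numerical sanity check on a scalar stand-in (the function passed as `E` below is a toy expectation, not the solver's loss):

```python
import numpy as np

def parameter_shift(E, theta, shift=np.pi / 2):
    # dE/dtheta = (1/2) * (E(theta + pi/2) - E(theta - pi/2)),
    # exact for expectation values generated by Pauli rotations.
    return 0.5 * (E(theta + shift) - E(theta - shift))

# For E(theta) = sin(theta), the rule reproduces cos(theta) exactly:
# 0.5 * (sin(t + pi/2) - sin(t - pi/2)) = cos(t).
theta = 0.3
grad = parameter_shift(np.sin, theta)
assert abs(grad - np.cos(theta)) < 1e-12
```

Unlike finite differences, the two shifted evaluations give the exact derivative for such trigonometric expectations, which is why the code evaluates the loss at `pn='+'` and `pn='-'` and returns half their difference.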
e435eaabd725202ba57c58a7883b1943a5ad5e85 | 126,399 | py | Python | resources/mgltools_x86_64Linux2_1.5.6/MGLToolsPckgs/MolKit/data/all_dat.py | J-E-J-S/aaRS-Pipeline | 43f59f28ab06e4b16328c3bc405cdddc6e69ac44 | [
"MIT"
] | 9 | 2021-03-06T04:24:28.000Z | 2022-01-03T09:53:07.000Z | MolKit/data/all_dat.py | e-mayo/autodocktools-prepare-py3k | 2dd2316837bcb7c19384294443b2855e5ccd3e01 | [
"BSD-3-Clause"
] | 3 | 2021-03-07T05:37:16.000Z | 2021-09-19T15:06:54.000Z | MolKit/data/all_dat.py | e-mayo/autodocktools-prepare-py3k | 2dd2316837bcb7c19384294443b2855e5ccd3e01 | [
"BSD-3-Clause"
# Per-residue topology/parameter records: internal-coordinate tree data
# (NA/NB/NC/I), geometry (blen/angle/torsion), partial charge, and atom type.
all_dat = {
"TYR": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O']],
"INTX,KFORM":['INT', '1'],
"HD2":{'torsion': 180.0, 'tree': 'E', 'NC': 16, 'NB': 19, 'NA': 21, 'I': 22, 'angle': 120.0, 'blen': 1.09, 'charge': 0.064, 'type': 'HC'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"OH":{'torsion': 180.0, 'tree': 'S', 'NC': 12, 'NB': 14, 'NA': 16, 'I': 17, 'angle': 120.0, 'blen': 1.36, 'charge': -0.528, 'type': 'OH'},
"HD1":{'torsion': 0.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 12, 'I': 13, 'angle': 120.0, 'blen': 1.09, 'charge': 0.064, 'type': 'HC'},
"HE1":{'torsion': 180.0, 'tree': 'E', 'NC': 11, 'NB': 12, 'NA': 14, 'I': 15, 'angle': 120.0, 'blen': 1.09, 'charge': 0.102, 'type': 'HC'},
"HE2":{'torsion': 180.0, 'tree': 'E', 'NC': 14, 'NB': 16, 'NA': 19, 'I': 20, 'angle': 120.0, 'blen': 1.09, 'charge': 0.102, 'type': 'HC'},
"CD2":{'torsion': 0.0, 'tree': 'S', 'NC': 14, 'NB': 16, 'NA': 19, 'I': 21, 'angle': 120.0, 'blen': 1.4, 'charge': -0.002, 'type': 'CA'},
"NAMRES":'TYROSINE',
"CD1":{'torsion': 180.0, 'tree': 'B', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 120.0, 'blen': 1.4, 'charge': -0.002, 'type': 'CA'},
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'CG', 'CD1', 'HD1', 'CE1', 'HE1', 'CZ', 'OH', 'HH', 'CE2', 'HE2', 'CD2', 'HD2', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"CE1":{'torsion': 180.0, 'tree': 'B', 'NC': 8, 'NB': 11, 'NA': 12, 'I': 14, 'angle': 120.0, 'blen': 1.4, 'charge': -0.264, 'type': 'CA'},
"CE2":{'torsion': 0.0, 'tree': 'B', 'NC': 12, 'NB': 14, 'NA': 16, 'I': 19, 'angle': 120.0, 'blen': 1.4, 'charge': -0.264, 'type': 'CA'},
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"HH":{'torsion': 0.0, 'tree': 'E', 'NC': 14, 'NB': 16, 'NA': 17, 'I': 18, 'angle': 113.0, 'blen': 0.96, 'charge': 0.334, 'type': 'HO'},
"CZ":{'torsion': 0.0, 'tree': 'B', 'NC': 11, 'NB': 12, 'NA': 14, 'I': 16, 'angle': 120.0, 'blen': 1.4, 'charge': 0.462, 'type': 'C'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 23, 'I': 24, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CG":{'torsion': 180.0, 'tree': 'S', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 109.47, 'blen': 1.51, 'charge': -0.03, 'type': 'CA'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.098, 'type': 'CT'},
"loopList":[['CG', 'CD2']],
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 23, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
},
"NHE": { "N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"INTX,KFORM":['INT', '1'],
"atNameList":['N', 'HN1', 'HN2'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.0000', '0.0000', '0.0000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.0000', '0.0000', '0.0000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.0000', '90.0000', '0.0000']],
"IFIXC,IOMIT,ISYMDU,IPOS":['CORRECT', 'OMIT', 'DU', 'BEG'],
"HN2":{'torsion': 180.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 119.8, 'blen': 1.01, 'charge': 0.2315, 'type': 'H'},
"impropTors":[['-M', 'HN1', 'N', 'HN2']],
"CUT":['0.00000'],
"HN1":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.2315, 'type': 'H'},
"NAMRES":'NH2 ENDING GROUP',
},
"ARG": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.056, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.056, 'type': 'HC'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O'], ['NE', 'NH1', 'CZ', 'NH2'], ['CD', 'CZ', 'NE', 'HE'], ['CZ', 'HH12', 'NH1', 'HH11'], ['CZ', 'HH22', 'NH2', 'HH21']],
"HH11":{'torsion': 0.0, 'tree': 'E', 'NC': 17, 'NB': 19, 'NA': 20, 'I': 21, 'angle': 119.8, 'blen': 1.01, 'charge': 0.361, 'type': 'H3'},
"HH12":{'torsion': 180.0, 'tree': 'E', 'NC': 17, 'NB': 19, 'NA': 20, 'I': 22, 'angle': 119.8, 'blen': 1.01, 'charge': 0.361, 'type': 'H3'},
"HH21":{'torsion': 0.0, 'tree': 'E', 'NC': 17, 'NB': 19, 'NA': 23, 'I': 24, 'angle': 119.8, 'blen': 1.01, 'charge': 0.361, 'type': 'H3'},
"HH22":{'torsion': 180.0, 'tree': 'E', 'NC': 17, 'NB': 19, 'NA': 23, 'I': 25, 'angle': 119.8, 'blen': 1.01, 'charge': 0.361, 'type': 'H3'},
"INTX,KFORM":['INT', '1'],
"NE":{'torsion': 180.0, 'tree': 'B', 'NC': 8, 'NB': 11, 'NA': 14, 'I': 17, 'angle': 111.0, 'blen': 1.48, 'charge': -0.324, 'type': 'N2'},
"HG2":{'torsion': 300.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.074, 'type': 'HC'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"HD2":{'torsion': 300.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 14, 'I': 15, 'angle': 109.5, 'blen': 1.09, 'charge': 0.133, 'type': 'HC'},
"HD3":{'torsion': 60.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 14, 'I': 16, 'angle': 109.5, 'blen': 1.09, 'charge': 0.133, 'type': 'HC'},
"NAMRES":'ARGININE',
"HE":{'torsion': 0.0, 'tree': 'E', 'NC': 11, 'NB': 14, 'NA': 17, 'I': 18, 'angle': 118.5, 'blen': 1.01, 'charge': 0.269, 'type': 'H3'},
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'CG', 'HG2', 'HG3', 'CD', 'HD2', 'HD3', 'NE', 'HE', 'CZ', 'NH1', 'HH11', 'HH12', 'NH2', 'HH21', 'HH22', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"NH2":{'torsion': 180.0, 'tree': 'B', 'NC': 14, 'NB': 17, 'NA': 19, 'I': 23, 'angle': 118.0, 'blen': 1.33, 'charge': -0.624, 'type': 'N2'},
"HG3":{'torsion': 60.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 13, 'angle': 109.5, 'blen': 1.09, 'charge': 0.074, 'type': 'HC'},
"NH1":{'torsion': 0.0, 'tree': 'B', 'NC': 14, 'NB': 17, 'NA': 19, 'I': 20, 'angle': 122.0, 'blen': 1.33, 'charge': -0.624, 'type': 'N2'},
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"CZ":{'torsion': 180.0, 'tree': 'B', 'NC': 11, 'NB': 14, 'NA': 17, 'I': 19, 'angle': 123.0, 'blen': 1.33, 'charge': 0.76, 'type': 'CA'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"CD":{'torsion': 180.0, 'tree': '3', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 14, 'angle': 109.47, 'blen': 1.525, 'charge': -0.228, 'type': 'CT'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 26, 'I': 27, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CG":{'torsion': 180.0, 'tree': '3', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 109.47, 'blen': 1.525, 'charge': -0.103, 'type': 'CT'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.08, 'type': 'CT'},
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 26, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
},
"LEU": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.033, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.033, 'type': 'HC'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O']],
"INTX,KFORM":['INT', '1'],
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"NAMRES":'LEUCINE',
"HG":{'torsion': 300.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.031, 'type': 'HC'},
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'CG', 'HG', 'CD1', 'HD11', 'HD12', 'HD13', 'CD2', 'HD21', 'HD22', 'HD23', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"HD11":{'torsion': 60.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 13, 'I': 14, 'angle': 109.5, 'blen': 1.09, 'charge': 0.034, 'type': 'HC'},
"HD12":{'torsion': 180.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 13, 'I': 15, 'angle': 109.5, 'blen': 1.09, 'charge': 0.034, 'type': 'HC'},
"HD13":{'torsion': 300.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 13, 'I': 16, 'angle': 109.5, 'blen': 1.09, 'charge': 0.034, 'type': 'HC'},
"CD2":{'torsion': 180.0, 'tree': '3', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 17, 'angle': 109.47, 'blen': 1.525, 'charge': -0.107, 'type': 'CT'},
"CD1":{'torsion': 60.0, 'tree': '3', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 13, 'angle': 109.47, 'blen': 1.525, 'charge': -0.107, 'type': 'CT'},
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 21, 'I': 22, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CG":{'torsion': 180.0, 'tree': '3', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 109.47, 'blen': 1.525, 'charge': -0.01, 'type': 'CT'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.061, 'type': 'CT'},
"HD21":{'torsion': 60.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 17, 'I': 18, 'angle': 109.5, 'blen': 1.09, 'charge': 0.034, 'type': 'HC'},
"HD23":{'torsion': 300.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 17, 'I': 20, 'angle': 109.5, 'blen': 1.09, 'charge': 0.034, 'type': 'HC'},
"HD22":{'torsion': 180.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 17, 'I': 19, 'angle': 109.5, 'blen': 1.09, 'charge': 0.034, 'type': 'HC'},
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 21, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
},
"RURA": { "C4":{'torsion': 0.0, 'tree': 'B', 'NC': 13, 'NB': 14, 'NA': 16, 'I': 18, 'angle': 120.78, 'blen': 1.44, 'charge': 0.834, 'type': 'C'},
"C5":{'torsion': 177.3, 'tree': 'B', 'NC': 11, 'NB': 13, 'NA': 14, 'I': 16, 'angle': 121.22, 'blen': 1.34, 'charge': -0.529, 'type': 'CM'},
"C6":{'torsion': 81.59, 'tree': 'B', 'NC': 10, 'NB': 11, 'NA': 13, 'I': 14, 'angle': 123.04, 'blen': 1.37, 'charge': 0.16, 'type': 'CM'},
"O3'":{'torsion': -203.47, 'tree': 'M', 'NC': 5, 'NB': 8, 'NA': 24, 'I': 30, 'angle': 116.52, 'blen': 1.42, 'charge': -0.509, 'type': 'OS'},
"H6":{'torsion': 0.0, 'tree': 'E', 'NC': 11, 'NB': 13, 'NA': 14, 'I': 15, 'angle': 120.0, 'blen': 1.08, 'charge': 0.098, 'type': 'HC'},
"O2'":{'torsion': 240.0, 'tree': 'S', 'NC': 8, 'NB': 24, 'NA': 26, 'I': 28, 'angle': 109.5, 'blen': 1.43, 'charge': -0.546, 'type': 'OH'},
"C2":{'torsion': 0.0, 'tree': 'S', 'NC': 16, 'NB': 18, 'NA': 20, 'I': 22, 'angle': 126.46, 'blen': 1.38, 'charge': 0.775, 'type': 'C'},
"C2'":{'torsion': -86.3, 'tree': 'B', 'NC': 5, 'NB': 8, 'NA': 24, 'I': 26, 'angle': 102.8, 'blen': 1.53, 'charge': 0.101, 'type': 'CT'},
"impropTors":[['N1', 'N3', 'C2', 'O2'], ['C5', 'N3', 'C4', 'O4'], ['C2', 'C4', 'N3', 'H3']],
"O1'":{'torsion': -86.31, 'tree': 'S', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 10, 'angle': 108.86, 'blen': 1.46, 'charge': -0.343, 'type': 'OS'},
"H5":{'torsion': 180.0, 'tree': 'E', 'NC': 13, 'NB': 14, 'NA': 16, 'I': 17, 'angle': 120.0, 'blen': 1.09, 'charge': 0.146, 'type': 'HC'},
"INTX,KFORM":['INT', '1'],
"O5'":{'torsion': -98.89, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 101.43, 'blen': 1.6, 'charge': -0.509, 'type': 'OS'},
"N3":{'torsion': 0.0, 'tree': 'B', 'NC': 14, 'NB': 16, 'NA': 18, 'I': 20, 'angle': 114.07, 'blen': 1.38, 'charge': -0.768, 'type': 'NA'},
"H3":{'torsion': 180.0, 'tree': 'E', 'NC': 16, 'NB': 18, 'NA': 20, 'I': 21, 'angle': 116.77, 'blen': 1.09, 'charge': 0.334, 'type': 'H'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"loopList":[["C1'", "C2'"], ['C2', 'N1']],
"H3'":{'torsion': 30.0, 'tree': 'E', 'NC': 5, 'NB': 8, 'NA': 24, 'I': 25, 'angle': 109.5, 'blen': 1.09, 'charge': 0.007, 'type': 'HC'},
"NAMRES":'R-URACIL',
"H4'":{'torsion': -200.0, 'tree': 'E', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.061, 'type': 'HC'},
"H1'":{'torsion': -240.0, 'tree': 'E', 'NC': 8, 'NB': 10, 'NA': 11, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.054, 'type': 'HC'},
"O2":{'torsion': 180.0, 'tree': 'E', 'NC': 18, 'NB': 20, 'NA': 22, 'I': 23, 'angle': 121.7, 'blen': 1.22, 'charge': -0.472, 'type': 'O'},
"atNameList":["O5'", "C5'", "H5'1", "H5'2", "C4'", "H4'", "O1'", "C1'", "H1'", 'N1', 'C6', 'H6', 'C5', 'H5', 'C4', 'O4', 'N3', 'H3', 'C2', 'O2', "C3'", "H3'", "C2'", "H2'", "O2'", "HO2'", "O3'"],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"H2'":{'torsion': 120.0, 'tree': 'E', 'NC': 8, 'NB': 24, 'NA': 26, 'I': 27, 'angle': 109.5, 'blen': 1.09, 'charge': 0.008, 'type': 'HC'},
"N1":{'torsion': -127.7, 'tree': 'S', 'NC': 8, 'NB': 10, 'NA': 11, 'I': 13, 'angle': 109.59, 'blen': 1.53, 'charge': -0.159, 'type': 'N*'},
"CUT":['0.00000'],
"H5'2":{'torsion': -280.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.008, 'type': 'HC'},
"H5'1":{'torsion': -160.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 6, 'angle': 109.5, 'blen': 1.09, 'charge': 0.008, 'type': 'HC'},
"C4'":{'torsion': -151.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 8, 'angle': 110.0, 'blen': 1.52, 'charge': 0.1, 'type': 'CT'},
"C5'":{'torsion': -39.22, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.0, 'blen': 1.44, 'charge': 0.18, 'type': 'CT'},
"C1'":{'torsion': 105.6, 'tree': 'B', 'NC': 5, 'NB': 8, 'NA': 10, 'I': 11, 'angle': 110.04, 'blen': 1.42, 'charge': 0.117, 'type': 'CT'},
"C3'":{'torsion': -329.11, 'tree': 'M', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 24, 'angle': 115.78, 'blen': 1.53, 'charge': 0.303, 'type': 'CT'},
"HO2'":{'torsion': 180.0, 'tree': 'E', 'NC': 24, 'NB': 26, 'NA': 28, 'I': 29, 'angle': 107.0, 'blen': 0.96, 'charge': 0.324, 'type': 'HO'},
"O4":{'torsion': 180.0, 'tree': 'E', 'NC': 14, 'NB': 16, 'NA': 18, 'I': 19, 'angle': 125.35, 'blen': 1.23, 'charge': -0.474, 'type': 'O'},
},
"NME": { "CT":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.067, 'type': 'CT'},
"atNameList":['N', 'H', 'CT', 'HT1', 'HT2', 'HT3'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"HT1":{'torsion': 0.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"HT2":{'torsion': 120.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"HT3":{'torsion': 240.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"INTX,KFORM":['INT', '1'],
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"CUT":['0.00000'],
"NAMRES":'N-methyl all atom',
},
"MET": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.027, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.027, 'type': 'HC'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 21, 'I': 22, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O']],
"SD":{'torsion': 180.0, 'tree': '3', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 14, 'angle': 110.0, 'blen': 1.81, 'charge': 0.737, 'type': 'S'},
"LP1":{'torsion': 60.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 14, 'I': 15, 'angle': 96.7, 'blen': 0.679, 'charge': -0.381, 'type': 'LP'},
"HG3":{'torsion': 60.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 13, 'angle': 109.5, 'blen': 1.09, 'charge': 0.0652, 'type': 'HC'},
"HG2":{'torsion': 300.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.0652, 'type': 'HC'},
"INTX,KFORM":['INT', '1'],
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"HE3":{'torsion': 300.0, 'tree': 'E', 'NC': 11, 'NB': 14, 'NA': 17, 'I': 20, 'angle': 109.5, 'blen': 1.09, 'charge': 0.0652, 'type': 'HC'},
"HE2":{'torsion': 180.0, 'tree': 'E', 'NC': 11, 'NB': 14, 'NA': 17, 'I': 19, 'angle': 109.5, 'blen': 1.09, 'charge': 0.0652, 'type': 'HC'},
"HE1":{'torsion': 60.0, 'tree': 'E', 'NC': 11, 'NB': 14, 'NA': 17, 'I': 18, 'angle': 109.5, 'blen': 1.09, 'charge': 0.0652, 'type': 'HC'},
"NAMRES":'METHIONINE',
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'CG', 'HG2', 'HG3', 'SD', 'LP1', 'LP2', 'CE', 'HE1', 'HE2', 'HE3', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"CE":{'torsion': 180.0, 'tree': '3', 'NC': 8, 'NB': 11, 'NA': 14, 'I': 17, 'angle': 100.0, 'blen': 1.78, 'charge': -0.134, 'type': 'CT'},
"CG":{'torsion': 180.0, 'tree': '3', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 109.47, 'blen': 1.525, 'charge': -0.054, 'type': 'CT'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.151, 'type': 'CT'},
"LP2":{'torsion': 300.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 14, 'I': 16, 'angle': 96.7, 'blen': 0.679, 'charge': -0.381, 'type': 'LP'},
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 21, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
},
"IDBGEN,IREST,ITYPF":['1', '0', '2'],
"ALA": { "HB2":{'torsion': 180.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"HB3":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB1', 'HB2', 'HB3', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"HB1":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 12, 'I': 13, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.098, 'type': 'CT'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O']],
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"INTX,KFORM":['INT', '1'],
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 12, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
"NAMRES":'ALANINE',
},
"PHE": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.108, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.108, 'type': 'HC'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O']],
"INTX,KFORM":['INT', '1'],
"HD2":{'torsion': 180.0, 'tree': 'E', 'NC': 16, 'NB': 18, 'NA': 20, 'I': 21, 'angle': 120.0, 'blen': 1.09, 'charge': 0.15, 'type': 'HC'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"HD1":{'torsion': 0.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 12, 'I': 13, 'angle': 120.0, 'blen': 1.09, 'charge': 0.15, 'type': 'HC'},
"HE1":{'torsion': 180.0, 'tree': 'E', 'NC': 11, 'NB': 12, 'NA': 14, 'I': 15, 'angle': 120.0, 'blen': 1.09, 'charge': 0.15, 'type': 'HC'},
"HE2":{'torsion': 180.0, 'tree': 'E', 'NC': 14, 'NB': 16, 'NA': 18, 'I': 19, 'angle': 120.0, 'blen': 1.09, 'charge': 0.15, 'type': 'HC'},
"CD2":{'torsion': 0.0, 'tree': 'S', 'NC': 14, 'NB': 16, 'NA': 18, 'I': 20, 'angle': 120.0, 'blen': 1.4, 'charge': -0.15, 'type': 'CA'},
"NAMRES":'PHENYLALANINE',
"CD1":{'torsion': 180.0, 'tree': 'B', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 120.0, 'blen': 1.4, 'charge': -0.15, 'type': 'CA'},
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'CG', 'CD1', 'HD1', 'CE1', 'HE1', 'CZ', 'HZ', 'CE2', 'HE2', 'CD2', 'HD2', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"CE1":{'torsion': 180.0, 'tree': 'B', 'NC': 8, 'NB': 11, 'NA': 12, 'I': 14, 'angle': 120.0, 'blen': 1.4, 'charge': -0.15, 'type': 'CA'},
"CE2":{'torsion': 0.0, 'tree': 'B', 'NC': 12, 'NB': 14, 'NA': 16, 'I': 18, 'angle': 120.0, 'blen': 1.4, 'charge': -0.15, 'type': 'CA'},
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"CZ":{'torsion': 0.0, 'tree': 'B', 'NC': 11, 'NB': 12, 'NA': 14, 'I': 16, 'angle': 120.0, 'blen': 1.4, 'charge': -0.15, 'type': 'CA'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 22, 'I': 23, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CG":{'torsion': 180.0, 'tree': 'S', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 115.0, 'blen': 1.51, 'charge': -0.1, 'type': 'CA'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.1, 'type': 'CT'},
"loopList":[['CG', 'CD2']],
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 22, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
"HZ":{'torsion': 180.0, 'tree': 'E', 'NC': 12, 'NB': 14, 'NA': 16, 'I': 17, 'angle': 120.0, 'blen': 1.09, 'charge': 0.15, 'type': 'HC'},
},
"ROHE": { "INTX,KFORM":['INT', '1'],
"atNameList":['O', 'H'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"H":{'torsion': -39.22, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.0, 'blen': 0.96, 'charge': 0.226, 'type': 'HO'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"O":{'torsion': -78.6, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 101.43, 'blen': 1.6, 'charge': -0.621, 'type': 'OH'},
"CUT":['0.00000'],
"NAMRES":'R-OH END',
},
"NAMDBF":'db4.dat',
"SER": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.119, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.119, 'type': 'HC'},
"HG":{'torsion': 180.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 109.47, 'blen': 0.96, 'charge': 0.31, 'type': 'HO'},
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'OG', 'HG', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 13, 'I': 14, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': 0.018, 'type': 'CT'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O']],
"OG":{'torsion': 180.0, 'tree': 'S', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 109.47, 'blen': 1.43, 'charge': -0.55, 'type': 'OH'},
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"INTX,KFORM":['INT', '1'],
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 13, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
"NAMRES":'SERINE',
},
"RADE": { "H2":{'torsion': 180.0, 'tree': 'E', 'NC': 18, 'NB': 22, 'NA': 23, 'I': 24, 'angle': 120.0, 'blen': 1.08, 'charge': -0.032, 'type': 'HC'},
"C5":{'torsion': 0.0, 'tree': 'S', 'NC': 13, 'NB': 14, 'NA': 16, 'I': 17, 'angle': 104.0, 'blen': 1.39, 'charge': -0.097, 'type': 'CB'},
"C6":{'torsion': 180.0, 'tree': 'B', 'NC': 14, 'NB': 16, 'NA': 17, 'I': 18, 'angle': 132.42, 'blen': 1.4, 'charge': 0.769, 'type': 'CA'},
"O3'":{'torsion': -203.47, 'tree': 'M', 'NC': 5, 'NB': 8, 'NA': 27, 'I': 33, 'angle': 116.52, 'blen': 1.42, 'charge': -0.509, 'type': 'OS'},
"H62":{'torsion': 180.0, 'tree': 'E', 'NC': 17, 'NB': 18, 'NA': 19, 'I': 21, 'angle': 120.0, 'blen': 1.01, 'charge': 0.335, 'type': 'H2'},
"O2'":{'torsion': 240.0, 'tree': 'S', 'NC': 8, 'NB': 27, 'NA': 29, 'I': 31, 'angle': 109.5, 'blen': 1.43, 'charge': -0.546, 'type': 'OH'},
"C2":{'torsion': 0.0, 'tree': 'B', 'NC': 17, 'NB': 18, 'NA': 22, 'I': 23, 'angle': 118.8, 'blen': 1.33, 'charge': 0.661, 'type': 'CQ'},
"H61":{'torsion': 0.0, 'tree': 'E', 'NC': 17, 'NB': 18, 'NA': 19, 'I': 20, 'angle': 120.0, 'blen': 1.01, 'charge': 0.324, 'type': 'H2'},
"C2'":{'torsion': -86.3, 'tree': 'B', 'NC': 5, 'NB': 8, 'NA': 27, 'I': 29, 'angle': 102.8, 'blen': 1.53, 'charge': 0.101, 'type': 'CT'},
"impropTors":[['C6', 'H61', 'N6', 'H62']],
"C8":{'torsion': 81.59, 'tree': 'B', 'NC': 10, 'NB': 11, 'NA': 13, 'I': 14, 'angle': 131.2, 'blen': 1.37, 'charge': 0.263, 'type': 'CK'},
"O1'":{'torsion': -86.31, 'tree': 'S', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 10, 'angle': 108.86, 'blen': 1.46, 'charge': -0.343, 'type': 'OS'},
"INTX,KFORM":['INT', '1'],
"O5'":{'torsion': -98.89, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 101.43, 'blen': 1.6, 'charge': -0.509, 'type': 'OS'},
"N7":{'torsion': 177.0, 'tree': 'S', 'NC': 11, 'NB': 13, 'NA': 14, 'I': 16, 'angle': 113.93, 'blen': 1.3, 'charge': -0.543, 'type': 'NB'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"C4":{'torsion': 0.0, 'tree': 'E', 'NC': 22, 'NB': 23, 'NA': 25, 'I': 26, 'angle': 110.8, 'blen': 1.35, 'charge': 0.546, 'type': 'CB'},
"loopList":[["C1'", "C2'"], ['C4', 'C5'], ['C4', 'N9']],
"H3'":{'torsion': -80.0, 'tree': 'E', 'NC': 5, 'NB': 8, 'NA': 27, 'I': 28, 'angle': 109.5, 'blen': 1.09, 'charge': 0.007, 'type': 'HC'},
"NAMRES":'R-ADENOSINE',
"H8":{'torsion': 0.0, 'tree': 'E', 'NC': 11, 'NB': 13, 'NA': 14, 'I': 15, 'angle': 120.0, 'blen': 1.08, 'charge': 0.062, 'type': 'HC'},
"H4'":{'torsion': -200.0, 'tree': 'E', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.061, 'type': 'HC'},
"H1'":{'torsion': -240.0, 'tree': 'E', 'NC': 8, 'NB': 10, 'NA': 11, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.054, 'type': 'HC'},
"N6":{'torsion': 0.0, 'tree': 'B', 'NC': 16, 'NB': 17, 'NA': 18, 'I': 19, 'angle': 123.5, 'blen': 1.34, 'charge': -0.768, 'type': 'N2'},
"atNameList":["O5'", "C5'", "H5'1", "H5'2", "C4'", "H4'", "O1'", "C1'", "H1'", 'N9', 'C8', 'H8', 'N7', 'C5', 'C6', 'N6', 'H61', 'H62', 'N1', 'C2', 'H2', 'N3', 'C4', "C3'", "H3'", "C2'", "H2'", "O2'", "HO2'", "O3'"],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"H2'":{'torsion': 120.0, 'tree': 'E', 'NC': 8, 'NB': 27, 'NA': 29, 'I': 30, 'angle': 109.5, 'blen': 1.09, 'charge': 0.008, 'type': 'HC'},
"N1":{'torsion': 180.0, 'tree': 'S', 'NC': 16, 'NB': 17, 'NA': 18, 'I': 22, 'angle': 117.43, 'blen': 1.34, 'charge': -0.774, 'type': 'NC'},
"H5'2":{'torsion': 90.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.008, 'type': 'HC'},
"H5'1":{'torsion': -30.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 6, 'angle': 109.5, 'blen': 1.09, 'charge': 0.008, 'type': 'HC'},
"C4'":{'torsion': -151.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 8, 'angle': 110.0, 'blen': 1.52, 'charge': 0.1, 'type': 'CT'},
"N9":{'torsion': -127.7, 'tree': 'S', 'NC': 8, 'NB': 10, 'NA': 11, 'I': 13, 'angle': 109.59, 'blen': 1.52, 'charge': -0.073, 'type': 'N*'},
"C5'":{'torsion': -39.22, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.0, 'blen': 1.44, 'charge': 0.18, 'type': 'CT'},
"C1'":{'torsion': 105.6, 'tree': 'B', 'NC': 5, 'NB': 8, 'NA': 10, 'I': 11, 'angle': 110.04, 'blen': 1.42, 'charge': 0.117, 'type': 'CT'},
"C3'":{'torsion': -329.11, 'tree': 'M', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 27, 'angle': 115.78, 'blen': 1.53, 'charge': 0.303, 'type': 'CT'},
"N3":{'torsion': 0.0, 'tree': 'S', 'NC': 18, 'NB': 22, 'NA': 23, 'I': 25, 'angle': 129.17, 'blen': 1.32, 'charge': -0.728, 'type': 'NC'},
"HO2'":{'torsion': 180.0, 'tree': 'E', 'NC': 27, 'NB': 29, 'NA': 31, 'I': 32, 'angle': 107.0, 'blen': 0.96, 'charge': 0.324, 'type': 'HO'},
"CUT":['0.00000'],
},
"DTHY": { "H71":{'torsion': 60.0, 'tree': 'E', 'NC': 14, 'NB': 16, 'NA': 17, 'I': 18, 'angle': 109.5, 'blen': 1.09, 'charge': 0.114, 'type': 'HC'},
"C5":{'torsion': 177.3, 'tree': 'B', 'NC': 11, 'NB': 13, 'NA': 14, 'I': 16, 'angle': 121.22, 'blen': 1.34, 'charge': -0.176, 'type': 'CM'},
"C6":{'torsion': 81.59, 'tree': 'B', 'NC': 10, 'NB': 11, 'NA': 13, 'I': 14, 'angle': 123.04, 'blen': 1.37, 'charge': 0.034, 'type': 'CM'},
"H72":{'torsion': 180.0, 'tree': 'E', 'NC': 14, 'NB': 16, 'NA': 17, 'I': 19, 'angle': 109.5, 'blen': 1.09, 'charge': 0.114, 'type': 'HC'},
"H6":{'torsion': 0.0, 'tree': 'E', 'NC': 11, 'NB': 13, 'NA': 14, 'I': 15, 'angle': 120.0, 'blen': 1.08, 'charge': 0.134, 'type': 'HC'},
"O3'":{'torsion': -203.47, 'tree': 'M', 'NC': 5, 'NB': 8, 'NA': 27, 'I': 32, 'angle': 116.52, 'blen': 1.42, 'charge': -0.509, 'type': 'OS'},
"C2":{'torsion': 0.0, 'tree': 'S', 'NC': 16, 'NB': 21, 'NA': 23, 'I': 25, 'angle': 126.46, 'blen': 1.38, 'charge': 0.849, 'type': 'C'},
"N3":{'torsion': 0.0, 'tree': 'B', 'NC': 14, 'NB': 16, 'NA': 21, 'I': 23, 'angle': 114.07, 'blen': 1.38, 'charge': -0.851, 'type': 'NA'},
"C2'":{'torsion': -86.3, 'tree': 'B', 'NC': 5, 'NB': 8, 'NA': 27, 'I': 29, 'angle': 102.8, 'blen': 1.53, 'charge': -0.307, 'type': 'CT'},
"H2'1":{'torsion': 120.0, 'tree': 'E', 'NC': 8, 'NB': 27, 'NA': 29, 'I': 30, 'angle': 109.5, 'blen': 1.09, 'charge': 0.081, 'type': 'HC'},
"H2'2":{'torsion': 240.0, 'tree': 'E', 'NC': 8, 'NB': 27, 'NA': 29, 'I': 31, 'angle': 109.5, 'blen': 1.09, 'charge': 0.081, 'type': 'HC'},
"O1'":{'torsion': -86.31, 'tree': 'S', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 10, 'angle': 108.86, 'blen': 1.46, 'charge': -0.368, 'type': 'OS'},
"INTX,KFORM":['INT', '1'],
"O5'":{'torsion': -98.89, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 101.43, 'blen': 1.6, 'charge': -0.509, 'type': 'OS'},
"O4":{'torsion': 180.0, 'tree': 'E', 'NC': 14, 'NB': 16, 'NA': 21, 'I': 22, 'angle': 125.35, 'blen': 1.23, 'charge': -0.464, 'type': 'O'},
"H3":{'torsion': 180.0, 'tree': 'E', 'NC': 16, 'NB': 21, 'NA': 23, 'I': 24, 'angle': 116.77, 'blen': 1.09, 'charge': 0.355, 'type': 'H'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"C4":{'torsion': 0.0, 'tree': 'B', 'NC': 13, 'NB': 14, 'NA': 16, 'I': 21, 'angle': 120.78, 'blen': 1.44, 'charge': 0.809, 'type': 'C'},
"H73":{'torsion': 300.0, 'tree': 'E', 'NC': 14, 'NB': 16, 'NA': 17, 'I': 20, 'angle': 109.5, 'blen': 1.09, 'charge': 0.114, 'type': 'HC'},
"H3'":{'torsion': 30.0, 'tree': 'E', 'NC': 5, 'NB': 8, 'NA': 27, 'I': 28, 'angle': 109.5, 'blen': 1.09, 'charge': 0.025, 'type': 'HC'},
"NAMRES":'D-THY',
"C7":{'torsion': 180.0, 'tree': '3', 'NC': 13, 'NB': 14, 'NA': 16, 'I': 17, 'angle': 121.63, 'blen': 1.5, 'charge': -0.382, 'type': 'CT'},
"H4'":{'torsion': -200.0, 'tree': 'E', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.056, 'type': 'HC'},
"H1'":{'torsion': -240.0, 'tree': 'E', 'NC': 8, 'NB': 10, 'NA': 11, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.009, 'type': 'HC'},
"O2":{'torsion': 180.0, 'tree': 'E', 'NC': 21, 'NB': 23, 'NA': 25, 'I': 26, 'angle': 121.7, 'blen': 1.22, 'charge': -0.488, 'type': 'O'},
"atNameList":["O5'", "C5'", "H5'1", "H5'2", "C4'", "H4'", "O1'", "C1'", "H1'", 'N1', 'C6', 'H6', 'C5', 'C7', 'H71', 'H72', 'H73', 'C4', 'O4', 'N3', 'H3', 'C2', 'O2', "C3'", "H3'", "C2'", "H2'1", "H2'2", "O3'"],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"N1":{'torsion': -127.7, 'tree': 'S', 'NC': 8, 'NB': 10, 'NA': 11, 'I': 13, 'angle': 109.59, 'blen': 1.53, 'charge': -0.217, 'type': 'N*'},
"H5'2":{'torsion': -280.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.021, 'type': 'HC'},
"H5'1":{'torsion': -160.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 6, 'angle': 109.5, 'blen': 1.09, 'charge': 0.021, 'type': 'HC'},
"C4'":{'torsion': -151.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 8, 'angle': 110.0, 'blen': 1.52, 'charge': 0.036, 'type': 'CT'},
"C5'":{'torsion': -39.22, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.0, 'blen': 1.44, 'charge': 0.118, 'type': 'CT'},
"C1'":{'torsion': 105.6, 'tree': 'B', 'NC': 5, 'NB': 8, 'NA': 10, 'I': 11, 'angle': 110.04, 'blen': 1.42, 'charge': 0.376, 'type': 'CT'},
"C3'":{'torsion': -329.11, 'tree': 'M', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 27, 'angle': 115.78, 'blen': 1.53, 'charge': 0.233, 'type': 'CT'},
"loopList":[["C1'", "C2'"], ['C2', 'N1']],
"CUT":['0.00000'],
"impropTors":[['N1', 'N3', 'C2', 'O2'], ['C5', 'N3', 'C4', 'O4'], ['C2', 'C4', 'N3', 'H3']],
},
"DGUA": { "C4":{'torsion': 0.0, 'tree': 'E', 'NC': 20, 'NB': 22, 'NA': 26, 'I': 27, 'angle': 112.2, 'blen': 1.36, 'charge': 0.391, 'type': 'CB'},
"C5":{'torsion': 0.0, 'tree': 'S', 'NC': 13, 'NB': 14, 'NA': 16, 'I': 17, 'angle': 103.9, 'blen': 1.39, 'charge': -0.06, 'type': 'CB'},
"C6":{'torsion': 180.0, 'tree': 'B', 'NC': 14, 'NB': 16, 'NA': 17, 'I': 18, 'angle': 130.4, 'blen': 1.42, 'charge': 0.69, 'type': 'C'},
"O3'":{'torsion': -203.47, 'tree': 'M', 'NC': 5, 'NB': 8, 'NA': 28, 'I': 33, 'angle': 116.52, 'blen': 1.42, 'charge': -0.509, 'type': 'OS'},
"impropTors":[['C5', 'N1', 'C6', 'O6'], ['C6', 'C2', 'N1', 'H1'], ['C2', 'H21', 'N2', 'H22']],
"C2":{'torsion': -0.1, 'tree': 'B', 'NC': 17, 'NB': 18, 'NA': 20, 'I': 22, 'angle': 125.24, 'blen': 1.38, 'charge': 0.871, 'type': 'CA'},
"N3":{'torsion': 0.0, 'tree': 'S', 'NC': 18, 'NB': 20, 'NA': 22, 'I': 26, 'angle': 123.3, 'blen': 1.33, 'charge': -0.709, 'type': 'NC'},
"C2'":{'torsion': -86.3, 'tree': 'B', 'NC': 5, 'NB': 8, 'NA': 28, 'I': 30, 'angle': 102.8, 'blen': 1.53, 'charge': -0.307, 'type': 'CT'},
"H1":{'torsion': 179.9, 'tree': 'E', 'NC': 17, 'NB': 18, 'NA': 20, 'I': 21, 'angle': 117.36, 'blen': 1.0, 'charge': 0.336, 'type': 'H'},
"C8":{'torsion': 81.59, 'tree': 'B', 'NC': 10, 'NB': 11, 'NA': 13, 'I': 14, 'angle': 129.2, 'blen': 1.38, 'charge': 0.266, 'type': 'CK'},
"O1'":{'torsion': -86.31, 'tree': 'S', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 10, 'angle': 108.86, 'blen': 1.46, 'charge': -0.368, 'type': 'OS'},
"INTX,KFORM":['INT', '1'],
"O5'":{'torsion': -98.89, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 101.43, 'blen': 1.6, 'charge': -0.509, 'type': 'OS'},
"N7":{'torsion': -179.9, 'tree': 'S', 'NC': 11, 'NB': 13, 'NA': 14, 'I': 16, 'angle': 114.0, 'blen': 1.31, 'charge': -0.543, 'type': 'NB'},
"H2'2":{'torsion': 240.0, 'tree': 'E', 'NC': 8, 'NB': 28, 'NA': 30, 'I': 32, 'angle': 109.5, 'blen': 1.09, 'charge': 0.081, 'type': 'HC'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"N1":{'torsion': 180.0, 'tree': 'B', 'NC': 16, 'NB': 17, 'NA': 18, 'I': 20, 'angle': 111.38, 'blen': 1.4, 'charge': -0.729, 'type': 'NA'},
"loopList":[["C1'", "C2'"], ['C4', 'C5'], ['C4', 'N9']],
"H21":{'torsion': -0.82, 'tree': 'E', 'NC': 20, 'NB': 22, 'NA': 23, 'I': 24, 'angle': 127.0, 'blen': 1.01, 'charge': 0.325, 'type': 'H2'},
"NAMRES":'D-GUANOSINE',
"H8":{'torsion': 0.0, 'tree': 'E', 'NC': 11, 'NB': 13, 'NA': 14, 'I': 15, 'angle': 120.0, 'blen': 1.08, 'charge': 0.046, 'type': 'HC'},
"H4'":{'torsion': -200.0, 'tree': 'E', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.056, 'type': 'HC'},
"H1'":{'torsion': -240.0, 'tree': 'E', 'NC': 8, 'NB': 10, 'NA': 11, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.009, 'type': 'HC'},
"atNameList":["O5'", "C5'", "H5'1", "H5'2", "C4'", "H4'", "O1'", "C1'", "H1'", 'N9', 'C8', 'H8', 'N7', 'C5', 'C6', 'O6', 'N1', 'H1', 'C2', 'N2', 'H21', 'H22', 'N3', 'C4', "C3'", "H3'", "C2'", "H2'1", "H2'2", "O3'"],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"N2":{'torsion': 180.0, 'tree': 'B', 'NC': 18, 'NB': 20, 'NA': 22, 'I': 23, 'angle': 116.02, 'blen': 1.34, 'charge': -0.778, 'type': 'N2'},
"O6":{'torsion': 0.0, 'tree': 'E', 'NC': 16, 'NB': 17, 'NA': 18, 'I': 19, 'angle': 128.8, 'blen': 1.23, 'charge': -0.458, 'type': 'O'},
"H5'2":{'torsion': 90.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.021, 'type': 'HC'},
"H5'1":{'torsion': -30.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 6, 'angle': 109.5, 'blen': 1.09, 'charge': 0.021, 'type': 'HC'},
"H3'":{'torsion': -320.0, 'tree': 'E', 'NC': 5, 'NB': 8, 'NA': 28, 'I': 29, 'angle': 109.5, 'blen': 1.09, 'charge': 0.025, 'type': 'HC'},
"C4'":{'torsion': -151.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 8, 'angle': 110.0, 'blen': 1.52, 'charge': 0.036, 'type': 'CT'},
"H2'1":{'torsion': 120.0, 'tree': 'E', 'NC': 8, 'NB': 28, 'NA': 30, 'I': 31, 'angle': 109.5, 'blen': 1.09, 'charge': 0.081, 'type': 'HC'},
"N9":{'torsion': -127.7, 'tree': 'S', 'NC': 8, 'NB': 10, 'NA': 11, 'I': 13, 'angle': 108.06, 'blen': 1.49, 'charge': -0.042, 'type': 'N*'},
"C5'":{'torsion': -39.22, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.0, 'blen': 1.44, 'charge': 0.118, 'type': 'CT'},
"H22":{'torsion': -179.44, 'tree': 'E', 'NC': 20, 'NB': 22, 'NA': 23, 'I': 25, 'angle': 116.53, 'blen': 1.01, 'charge': 0.339, 'type': 'H2'},
"C1'":{'torsion': 105.6, 'tree': 'B', 'NC': 5, 'NB': 8, 'NA': 10, 'I': 11, 'angle': 110.04, 'blen': 1.42, 'charge': 0.376, 'type': 'CT'},
"C3'":{'torsion': -329.11, 'tree': 'M', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 28, 'angle': 115.78, 'blen': 1.53, 'charge': 0.233, 'type': 'CT'},
"CUT":['0.00000'],
},
"DPOM": { "INTX,KFORM":['INT', '1'],
"OB":{'torsion': -342.91, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 109.58, 'blen': 1.48, 'charge': -0.847, 'type': 'O2'},
"atNameList":['P', 'OA', 'OB'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"CUT":['0.00000'],
"OA":{'torsion': -214.89, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 109.61, 'blen': 1.48, 'charge': -0.847, 'type': 'O2'},
"P":{'torsion': -200.85, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 119.04, 'blen': 1.6, 'charge': 1.385, 'type': 'P'},
"NAMRES":'PHOSMI D',
},
"RGUA": { "C4":{'torsion': 0.0, 'tree': 'E', 'NC': 20, 'NB': 22, 'NA': 26, 'I': 27, 'angle': 112.2, 'blen': 1.36, 'charge': 0.391, 'type': 'CB'},
"C5":{'torsion': 0.0, 'tree': 'S', 'NC': 13, 'NB': 14, 'NA': 16, 'I': 17, 'angle': 103.9, 'blen': 1.39, 'charge': -0.06, 'type': 'CB'},
"C6":{'torsion': 180.0, 'tree': 'B', 'NC': 14, 'NB': 16, 'NA': 17, 'I': 18, 'angle': 130.4, 'blen': 1.42, 'charge': 0.69, 'type': 'C'},
"O3'":{'torsion': -203.47, 'tree': 'M', 'NC': 5, 'NB': 8, 'NA': 28, 'I': 34, 'angle': 116.52, 'blen': 1.42, 'charge': -0.509, 'type': 'OS'},
"O2'":{'torsion': 240.0, 'tree': 'S', 'NC': 8, 'NB': 28, 'NA': 30, 'I': 32, 'angle': 109.5, 'blen': 1.43, 'charge': -0.546, 'type': 'OH'},
"impropTors":[['C5', 'N1', 'C6', 'O6'], ['C6', 'C2', 'N1', 'H1'], ['C2', 'H21', 'N2', 'H22']],
"C2":{'torsion': -0.1, 'tree': 'B', 'NC': 17, 'NB': 18, 'NA': 20, 'I': 22, 'angle': 125.24, 'blen': 1.38, 'charge': 0.871, 'type': 'CA'},
"N3":{'torsion': 0.0, 'tree': 'S', 'NC': 18, 'NB': 20, 'NA': 22, 'I': 26, 'angle': 123.3, 'blen': 1.33, 'charge': -0.709, 'type': 'NC'},
"C2'":{'torsion': -86.3, 'tree': 'B', 'NC': 5, 'NB': 8, 'NA': 28, 'I': 30, 'angle': 102.8, 'blen': 1.53, 'charge': 0.101, 'type': 'CT'},
"H1":{'torsion': 179.9, 'tree': 'E', 'NC': 17, 'NB': 18, 'NA': 20, 'I': 21, 'angle': 117.36, 'blen': 1.0, 'charge': 0.336, 'type': 'H'},
"C8":{'torsion': 81.59, 'tree': 'B', 'NC': 10, 'NB': 11, 'NA': 13, 'I': 14, 'angle': 129.2, 'blen': 1.38, 'charge': 0.266, 'type': 'CK'},
"O1'":{'torsion': -86.31, 'tree': 'S', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 10, 'angle': 108.86, 'blen': 1.46, 'charge': -0.343, 'type': 'OS'},
"INTX,KFORM":['INT', '1'],
"O5'":{'torsion': -98.89, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 101.43, 'blen': 1.6, 'charge': -0.509, 'type': 'OS'},
"N7":{'torsion': -179.9, 'tree': 'S', 'NC': 11, 'NB': 13, 'NA': 14, 'I': 16, 'angle': 114.0, 'blen': 1.31, 'charge': -0.543, 'type': 'NB'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"N1":{'torsion': 180.0, 'tree': 'B', 'NC': 16, 'NB': 17, 'NA': 18, 'I': 20, 'angle': 111.38, 'blen': 1.4, 'charge': -0.729, 'type': 'NA'},
"loopList":[["C1'", "C2'"], ['C4', 'C5'], ['C4', 'N9']],
"H21":{'torsion': -0.82, 'tree': 'E', 'NC': 20, 'NB': 22, 'NA': 23, 'I': 24, 'angle': 127.0, 'blen': 1.01, 'charge': 0.325, 'type': 'H2'},
"NAMRES":'R-GUANOSINE',
"H8":{'torsion': 0.0, 'tree': 'E', 'NC': 11, 'NB': 13, 'NA': 14, 'I': 15, 'angle': 120.0, 'blen': 1.08, 'charge': 0.046, 'type': 'HC'},
"H2'":{'torsion': 120.0, 'tree': 'E', 'NC': 8, 'NB': 28, 'NA': 30, 'I': 31, 'angle': 109.5, 'blen': 1.09, 'charge': 0.008, 'type': 'HC'},
"H4'":{'torsion': -200.0, 'tree': 'E', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.061, 'type': 'HC'},
"H1'":{'torsion': -240.0, 'tree': 'E', 'NC': 8, 'NB': 10, 'NA': 11, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.054, 'type': 'HC'},
"atNameList":["O5'", "C5'", "H5'1", "H5'2", "C4'", "H4'", "O1'", "C1'", "H1'", 'N9', 'C8', 'H8', 'N7', 'C5', 'C6', 'O6', 'N1', 'H1', 'C2', 'N2', 'H21', 'H22', 'N3', 'C4', "C3'", "H3'", "C2'", "H2'", "O2'", "HO2'", "O3'"],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"N2":{'torsion': 180.0, 'tree': 'B', 'NC': 18, 'NB': 20, 'NA': 22, 'I': 23, 'angle': 116.02, 'blen': 1.34, 'charge': -0.778, 'type': 'N2'},
"O6":{'torsion': 0.0, 'tree': 'E', 'NC': 16, 'NB': 17, 'NA': 18, 'I': 19, 'angle': 128.8, 'blen': 1.23, 'charge': -0.458, 'type': 'O'},
"H5'2":{'torsion': 90.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.008, 'type': 'HC'},
"H5'1":{'torsion': -30.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 6, 'angle': 109.5, 'blen': 1.09, 'charge': 0.008, 'type': 'HC'},
"H3'":{'torsion': -320.0, 'tree': 'E', 'NC': 5, 'NB': 8, 'NA': 28, 'I': 29, 'angle': 109.5, 'blen': 1.09, 'charge': 0.007, 'type': 'HC'},
"C4'":{'torsion': -151.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 8, 'angle': 110.0, 'blen': 1.52, 'charge': 0.1, 'type': 'CT'},
"N9":{'torsion': -127.7, 'tree': 'S', 'NC': 8, 'NB': 10, 'NA': 11, 'I': 13, 'angle': 108.06, 'blen': 1.49, 'charge': -0.042, 'type': 'N*'},
"C5'":{'torsion': -39.22, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.0, 'blen': 1.44, 'charge': 0.18, 'type': 'CT'},
"H22":{'torsion': -179.44, 'tree': 'E', 'NC': 20, 'NB': 22, 'NA': 23, 'I': 25, 'angle': 116.53, 'blen': 1.01, 'charge': 0.339, 'type': 'H2'},
"C1'":{'torsion': 105.6, 'tree': 'B', 'NC': 5, 'NB': 8, 'NA': 10, 'I': 11, 'angle': 110.04, 'blen': 1.42, 'charge': 0.117, 'type': 'CT'},
"C3'":{'torsion': -329.11, 'tree': 'M', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 28, 'angle': 115.78, 'blen': 1.53, 'charge': 0.303, 'type': 'CT'},
"HO2'":{'torsion': 180.0, 'tree': 'E', 'NC': 28, 'NB': 30, 'NA': 32, 'I': 33, 'angle': 107.0, 'blen': 0.96, 'charge': 0.324, 'type': 'HO'},
"CUT":['0.00000'],
},
"TRP": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"HZ2":{'torsion': 0.0, 'tree': 'E', 'NC': 14, 'NB': 16, 'NA': 17, 'I': 18, 'angle': 120.0, 'blen': 1.09, 'charge': 0.084, 'type': 'HC'},
"HZ3":{'torsion': 180.0, 'tree': 'E', 'NC': 17, 'NB': 19, 'NA': 21, 'I': 22, 'angle': 120.0, 'blen': 1.09, 'charge': 0.057, 'type': 'HC'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O'], ['CD1', 'CE2', 'NE1', 'HE1'], ['CE2', 'CH2', 'CZ2', 'HZ2'], ['CZ2', 'CZ3', 'CH2', 'HH2'], ['CH2', 'CE3', 'CZ3', 'HZ3'], ['CZ3', 'CD2', 'CE3', 'HE3']],
"CH2":{'torsion': 180.0, 'tree': 'B', 'NC': 14, 'NB': 16, 'NA': 17, 'I': 19, 'angle': 116.0, 'blen': 1.39, 'charge': -0.077, 'type': 'CA'},
"CZ3":{'torsion': 0.0, 'tree': 'B', 'NC': 16, 'NB': 17, 'NA': 19, 'I': 21, 'angle': 121.0, 'blen': 1.35, 'charge': -0.066, 'type': 'CA'},
"NE1":{'torsion': 180.0, 'tree': 'B', 'NC': 8, 'NB': 11, 'NA': 12, 'I': 14, 'angle': 107.0, 'blen': 1.43, 'charge': -0.352, 'type': 'NA'},
"INTX,KFORM":['INT', '1'],
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"HE3":{'torsion': 180.0, 'tree': 'E', 'NC': 19, 'NB': 21, 'NA': 23, 'I': 24, 'angle': 120.0, 'blen': 1.09, 'charge': 0.086, 'type': 'HC'},
"HD1":{'torsion': 0.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 12, 'I': 13, 'angle': 120.0, 'blen': 1.09, 'charge': 0.093, 'type': 'HC'},
"HE1":{'torsion': 180.0, 'tree': 'E', 'NC': 11, 'NB': 12, 'NA': 14, 'I': 15, 'angle': 125.5, 'blen': 1.01, 'charge': 0.271, 'type': 'H'},
"NAMRES":'TRYPTOPHAN',
"CD1":{'torsion': 180.0, 'tree': 'B', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 127.0, 'blen': 1.34, 'charge': 0.044, 'type': 'CW'},
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'CG', 'CD1', 'HD1', 'NE1', 'HE1', 'CE2', 'CZ2', 'HZ2', 'CH2', 'HH2', 'CZ3', 'HZ3', 'CE3', 'HE3', 'CD2', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"CD2":{'torsion': 0.0, 'tree': 'E', 'NC': 19, 'NB': 21, 'NA': 23, 'I': 25, 'angle': 117.0, 'blen': 1.4, 'charge': 0.146, 'type': 'CB'},
"CE2":{'torsion': 0.0, 'tree': 'S', 'NC': 11, 'NB': 12, 'NA': 14, 'I': 16, 'angle': 109.0, 'blen': 1.31, 'charge': 0.154, 'type': 'CN'},
"CE3":{'torsion': 0.0, 'tree': 'B', 'NC': 17, 'NB': 19, 'NA': 21, 'I': 23, 'angle': 122.0, 'blen': 1.41, 'charge': -0.173, 'type': 'CA'},
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 26, 'I': 27, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CG":{'torsion': 180.0, 'tree': 'S', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 115.0, 'blen': 1.51, 'charge': -0.135, 'type': 'C*'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.098, 'type': 'CT'},
"CZ2":{'torsion': 180.0, 'tree': 'B', 'NC': 12, 'NB': 14, 'NA': 16, 'I': 17, 'angle': 128.0, 'blen': 1.4, 'charge': -0.168, 'type': 'CA'},
"loopList":[['CG', 'CD2'], ['CE2', 'CD2']],
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 26, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
"HH2":{'torsion': 180.0, 'tree': 'E', 'NC': 16, 'NB': 17, 'NA': 19, 'I': 20, 'angle': 120.0, 'blen': 1.09, 'charge': 0.074, 'type': 'HC'},
},
"THR": { "atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB', 'CG2', 'HG21', 'HG22', 'HG23', 'OG1', 'HG1', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"HG23":{'torsion': 300.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 10, 'I': 13, 'angle': 109.5, 'blen': 1.09, 'charge': 0.065, 'type': 'HC'},
"HB":{'torsion': 180.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.082, 'type': 'HC'},
"HG22":{'torsion': 180.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 10, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.065, 'type': 'HC'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': 0.17, 'type': 'CT'},
"HG1":{'torsion': 180.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 14, 'I': 15, 'angle': 109.47, 'blen': 0.96, 'charge': 0.31, 'type': 'HO'},
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O']],
"HG21":{'torsion': 60.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 10, 'I': 11, 'angle': 109.5, 'blen': 1.09, 'charge': 0.065, 'type': 'HC'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"INTX,KFORM":['INT', '1'],
"OG1":{'torsion': 60.0, 'tree': 'S', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 14, 'angle': 109.47, 'blen': 1.43, 'charge': -0.55, 'type': 'OH'},
"CG2":{'torsion': 300.0, 'tree': '3', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.47, 'blen': 1.525, 'charge': -0.191, 'type': 'CT'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 16, 'I': 17, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 16, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
"NAMRES":'THREONINE',
},
"DCYT": { "C4":{'torsion': 0.0, 'tree': 'B', 'NC': 13, 'NB': 14, 'NA': 16, 'I': 18, 'angle': 116.9, 'blen': 1.43, 'charge': 0.935, 'type': 'CA'},
"C5":{'torsion': 180.0, 'tree': 'B', 'NC': 11, 'NB': 13, 'NA': 14, 'I': 16, 'angle': 121.0, 'blen': 1.36, 'charge': -0.576, 'type': 'CM'},
"C6":{'torsion': 81.59, 'tree': 'B', 'NC': 10, 'NB': 11, 'NA': 13, 'I': 14, 'angle': 121.1, 'blen': 1.36, 'charge': 0.185, 'type': 'CM'},
"O3'":{'torsion': -203.47, 'tree': 'M', 'NC': 5, 'NB': 8, 'NA': 25, 'I': 30, 'angle': 116.52, 'blen': 1.42, 'charge': -0.509, 'type': 'OS'},
"H6":{'torsion': 0.0, 'tree': 'E', 'NC': 11, 'NB': 13, 'NA': 14, 'I': 15, 'angle': 120.0, 'blen': 1.08, 'charge': 0.098, 'type': 'HC'},
"impropTors":[['N1', 'N3', 'C2', 'O2'], ['C4', 'H41', 'N4', 'H42']],
"C2":{'torsion': 0.0, 'tree': 'S', 'NC': 16, 'NB': 18, 'NA': 22, 'I': 23, 'angle': 120.5, 'blen': 1.36, 'charge': 0.859, 'type': 'C'},
"H41":{'torsion': 0.0, 'tree': 'E', 'NC': 16, 'NB': 18, 'NA': 19, 'I': 20, 'angle': 117.7, 'blen': 1.01, 'charge': 0.329, 'type': 'H2'},
"H42":{'torsion': 180.0, 'tree': 'E', 'NC': 16, 'NB': 18, 'NA': 19, 'I': 21, 'angle': 120.27, 'blen': 1.01, 'charge': 0.351, 'type': 'H2'},
"H2'2":{'torsion': -320.0, 'tree': 'E', 'NC': 8, 'NB': 25, 'NA': 27, 'I': 29, 'angle': 109.5, 'blen': 1.09, 'charge': 0.081, 'type': 'HC'},
"O1'":{'torsion': -86.31, 'tree': 'S', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 10, 'angle': 108.86, 'blen': 1.46, 'charge': -0.368, 'type': 'OS'},
"H5":{'torsion': 180.0, 'tree': 'E', 'NC': 13, 'NB': 14, 'NA': 16, 'I': 17, 'angle': 120.0, 'blen': 1.09, 'charge': 0.153, 'type': 'HC'},
"INTX,KFORM":['INT', '1'],
"O5'":{'torsion': -98.89, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 101.43, 'blen': 1.6, 'charge': -0.509, 'type': 'OS'},
"N3":{'torsion': 0.0, 'tree': 'S', 'NC': 14, 'NB': 16, 'NA': 18, 'I': 22, 'angle': 121.7, 'blen': 1.33, 'charge': -0.86, 'type': 'NC'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"loopList":[["C1'", "C2'"], ['C2', 'N1']],
"N4":{'torsion': 180.0, 'tree': 'B', 'NC': 14, 'NB': 16, 'NA': 18, 'I': 19, 'angle': 120.1, 'blen': 1.32, 'charge': -0.834, 'type': 'N2'},
"NAMRES":'D-CYTOSINE',
"H4'":{'torsion': -200.0, 'tree': 'E', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.056, 'type': 'HC'},
"H1'":{'torsion': -240.0, 'tree': 'E', 'NC': 8, 'NB': 10, 'NA': 11, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.009, 'type': 'HC'},
"O2":{'torsion': 180.0, 'tree': 'E', 'NC': 18, 'NB': 22, 'NA': 23, 'I': 24, 'angle': 122.4, 'blen': 1.24, 'charge': -0.508, 'type': 'O'},
"atNameList":["O5'", "C5'", "H5'1", "H5'2", "C4'", "H4'", "O1'", "C1'", "H1'", 'N1', 'C6', 'H6', 'C5', 'H5', 'C4', 'N4', 'H41', 'H42', 'N3', 'C2', 'O2', "C3'", "H3'", "C2'", "H2'1", "H2'2", "O3'"],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"H2'1":{'torsion': -200.0, 'tree': 'E', 'NC': 8, 'NB': 25, 'NA': 27, 'I': 28, 'angle': 109.5, 'blen': 1.09, 'charge': 0.081, 'type': 'HC'},
"N1":{'torsion': -127.7, 'tree': 'S', 'NC': 8, 'NB': 10, 'NA': 11, 'I': 13, 'angle': 108.1, 'blen': 1.49, 'charge': -0.187, 'type': 'N*'},
"C2'":{'torsion': -86.3, 'tree': 'B', 'NC': 5, 'NB': 8, 'NA': 25, 'I': 27, 'angle': 102.8, 'blen': 1.53, 'charge': -0.307, 'type': 'CT'},
"H5'2":{'torsion': 330.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.021, 'type': 'HC'},
"H5'1":{'torsion': 90.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 6, 'angle': 109.5, 'blen': 1.09, 'charge': 0.021, 'type': 'HC'},
"H3'":{'torsion': -320.0, 'tree': 'E', 'NC': 5, 'NB': 8, 'NA': 25, 'I': 26, 'angle': 109.5, 'blen': 1.09, 'charge': 0.025, 'type': 'HC'},
"C4'":{'torsion': -151.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 8, 'angle': 110.0, 'blen': 1.52, 'charge': 0.036, 'type': 'CT'},
"C5'":{'torsion': -39.22, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.0, 'blen': 1.44, 'charge': 0.118, 'type': 'CT'},
"C1'":{'torsion': 105.6, 'tree': 'B', 'NC': 5, 'NB': 8, 'NA': 10, 'I': 11, 'angle': 110.04, 'blen': 1.42, 'charge': 0.376, 'type': 'CT'},
"C3'":{'torsion': -329.11, 'tree': 'M', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 25, 'angle': 115.78, 'blen': 1.53, 'charge': 0.233, 'type': 'CT'},
"CUT":['0.00000'],
},
"VAL": { "HG22":{'torsion': 180.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 14, 'I': 16, 'angle': 109.5, 'blen': 1.09, 'charge': 0.031, 'type': 'HC'},
"HG23":{'torsion': 300.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 14, 'I': 17, 'angle': 109.5, 'blen': 1.09, 'charge': 0.031, 'type': 'HC'},
"HG21":{'torsion': 60.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 14, 'I': 15, 'angle': 109.5, 'blen': 1.09, 'charge': 0.031, 'type': 'HC'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O']],
"HG13":{'torsion': 300.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 10, 'I': 13, 'angle': 109.5, 'blen': 1.09, 'charge': 0.031, 'type': 'HC'},
"HG12":{'torsion': 180.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 10, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.031, 'type': 'HC'},
"HG11":{'torsion': 60.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 10, 'I': 11, 'angle': 109.5, 'blen': 1.09, 'charge': 0.031, 'type': 'HC'},
"INTX,KFORM":['INT', '1'],
"CG2":{'torsion': 180.0, 'tree': '3', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 14, 'angle': 109.47, 'blen': 1.525, 'charge': -0.091, 'type': 'CT'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"CG1":{'torsion': 60.0, 'tree': '3', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.47, 'blen': 1.525, 'charge': -0.091, 'type': 'CT'},
"NAMRES":'VALINE',
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB', 'CG1', 'HG11', 'HG12', 'HG13', 'CG2', 'HG21', 'HG22', 'HG23', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"HB":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.024, 'type': 'HC'},
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 18, 'I': 19, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.012, 'type': 'CT'},
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 18, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
},
"WT3": { "HW1":{'torsion': -98.89, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 101.43, 'blen': 1.0, 'charge': 0.417, 'type': 'HW'},
"INTX,KFORM":['INT', '1'],
"atNameList":['HW1', 'OW', 'HW2'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"loopList":[['HW1', 'HW2']],
"HW2":{'torsion': -151.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 6, 'angle': 104.52, 'blen': 0.9572, 'charge': 0.417, 'type': 'HW'},
"CUT":['0.00000'],
"OW":{'torsion': -39.22, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 104.52, 'blen': 0.9572, 'charge': -0.834, 'type': 'OW'},
"NAMRES":'WATER, TIP3P MODEL',
},
"RCYT": { "C4":{'torsion': 0.0, 'tree': 'B', 'NC': 13, 'NB': 14, 'NA': 16, 'I': 18, 'angle': 116.9, 'blen': 1.43, 'charge': 0.935, 'type': 'CA'},
"C5":{'torsion': 180.0, 'tree': 'B', 'NC': 11, 'NB': 13, 'NA': 14, 'I': 16, 'angle': 121.0, 'blen': 1.36, 'charge': -0.576, 'type': 'CM'},
"C6":{'torsion': 81.59, 'tree': 'B', 'NC': 10, 'NB': 11, 'NA': 13, 'I': 14, 'angle': 121.1, 'blen': 1.36, 'charge': 0.185, 'type': 'CM'},
"O3'":{'torsion': -203.47, 'tree': 'M', 'NC': 5, 'NB': 8, 'NA': 25, 'I': 31, 'angle': 116.52, 'blen': 1.42, 'charge': -0.509, 'type': 'OS'},
"H6":{'torsion': 0.0, 'tree': 'E', 'NC': 11, 'NB': 13, 'NA': 14, 'I': 15, 'angle': 120.0, 'blen': 1.08, 'charge': 0.098, 'type': 'HC'},
"O2'":{'torsion': 240.0, 'tree': 'S', 'NC': 8, 'NB': 25, 'NA': 27, 'I': 29, 'angle': 109.5, 'blen': 1.43, 'charge': -0.546, 'type': 'OH'},
"C2":{'torsion': 0.0, 'tree': 'S', 'NC': 16, 'NB': 18, 'NA': 22, 'I': 23, 'angle': 120.5, 'blen': 1.36, 'charge': 0.859, 'type': 'C'},
"H41":{'torsion': 0.0, 'tree': 'E', 'NC': 16, 'NB': 18, 'NA': 19, 'I': 20, 'angle': 117.7, 'blen': 1.01, 'charge': 0.329, 'type': 'H2'},
"H42":{'torsion': 180.0, 'tree': 'E', 'NC': 16, 'NB': 18, 'NA': 19, 'I': 21, 'angle': 120.27, 'blen': 1.01, 'charge': 0.351, 'type': 'H2'},
"O1'":{'torsion': -86.31, 'tree': 'S', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 10, 'angle': 108.86, 'blen': 1.46, 'charge': -0.343, 'type': 'OS'},
"H5":{'torsion': 180.0, 'tree': 'E', 'NC': 13, 'NB': 14, 'NA': 16, 'I': 17, 'angle': 120.0, 'blen': 1.09, 'charge': 0.153, 'type': 'HC'},
"INTX,KFORM":['INT', '1'],
"O5'":{'torsion': -98.89, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 101.43, 'blen': 1.6, 'charge': -0.509, 'type': 'OS'},
"N3":{'torsion': 0.0, 'tree': 'S', 'NC': 14, 'NB': 16, 'NA': 18, 'I': 22, 'angle': 121.7, 'blen': 1.33, 'charge': -0.86, 'type': 'NC'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"loopList":[["C1'", "C2'"], ['C2', 'N1']],
"N4":{'torsion': 180.0, 'tree': 'B', 'NC': 14, 'NB': 16, 'NA': 18, 'I': 19, 'angle': 120.1, 'blen': 1.32, 'charge': -0.834, 'type': 'N2'},
"NAMRES":'R-CYTOSINE',
"H4'":{'torsion': -200.0, 'tree': 'E', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.061, 'type': 'HC'},
"H1'":{'torsion': -240.0, 'tree': 'E', 'NC': 8, 'NB': 10, 'NA': 11, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.054, 'type': 'HC'},
"O2":{'torsion': 180.0, 'tree': 'E', 'NC': 18, 'NB': 22, 'NA': 23, 'I': 24, 'angle': 122.4, 'blen': 1.24, 'charge': -0.508, 'type': 'O'},
"atNameList":["O5'", "C5'", "H5'1", "H5'2", "C4'", "H4'", "O1'", "C1'", "H1'", 'N1', 'C6', 'H6', 'C5', 'H5', 'C4', 'N4', 'H41', 'H42', 'N3', 'C2', 'O2', "C3'", "H3'", "C2'", "H2'", "O2'", "HO2'", "O3'"],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"H2'":{'torsion': 120.0, 'tree': 'E', 'NC': 8, 'NB': 25, 'NA': 27, 'I': 28, 'angle': 109.5, 'blen': 1.09, 'charge': 0.008, 'type': 'HC'},
"N1":{'torsion': -127.7, 'tree': 'S', 'NC': 8, 'NB': 10, 'NA': 11, 'I': 13, 'angle': 108.1, 'blen': 1.49, 'charge': -0.187, 'type': 'N*'},
"C2'":{'torsion': -86.3, 'tree': 'B', 'NC': 5, 'NB': 8, 'NA': 25, 'I': 27, 'angle': 102.8, 'blen': 1.53, 'charge': 0.101, 'type': 'CT'},
"H5'2":{'torsion': 330.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.008, 'type': 'HC'},
"H5'1":{'torsion': 90.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 6, 'angle': 109.5, 'blen': 1.09, 'charge': 0.008, 'type': 'HC'},
"H3'":{'torsion': -320.0, 'tree': 'E', 'NC': 5, 'NB': 8, 'NA': 25, 'I': 26, 'angle': 109.5, 'blen': 1.09, 'charge': 0.007, 'type': 'HC'},
"C4'":{'torsion': -151.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 8, 'angle': 110.0, 'blen': 1.52, 'charge': 0.1, 'type': 'CT'},
"C5'":{'torsion': -39.22, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.0, 'blen': 1.44, 'charge': 0.18, 'type': 'CT'},
"C1'":{'torsion': 105.6, 'tree': 'B', 'NC': 5, 'NB': 8, 'NA': 10, 'I': 11, 'angle': 110.04, 'blen': 1.42, 'charge': 0.117, 'type': 'CT'},
"C3'":{'torsion': -329.11, 'tree': 'M', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 25, 'angle': 115.78, 'blen': 1.53, 'charge': 0.303, 'type': 'CT'},
"HO2'":{'torsion': 180.0, 'tree': 'E', 'NC': 25, 'NB': 27, 'NA': 29, 'I': 30, 'angle': 107.0, 'blen': 0.96, 'charge': 0.324, 'type': 'HO'},
"CUT":['0.00000'],
"impropTors":[['N1', 'N3', 'C2', 'O2'], ['C4', 'H41', 'N4', 'H42']],
},
"RPOM": { "INTX,KFORM":['INT', '1'],
"OB":{'torsion': -332.61, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 109.65, 'blen': 1.48, 'charge': -0.847, 'type': 'O2'},
"atNameList":['P', 'OA', 'OB'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"CUT":['0.00000'],
"OA":{'torsion': -194.89, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 109.58, 'blen': 1.48, 'charge': -0.847, 'type': 'O2'},
"P":{'torsion': -213.19, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 119.04, 'blen': 1.6, 'charge': 1.385, 'type': 'P'},
"NAMRES":'PHOSMI R',
},
"ASN": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'CG', 'OD1', 'ND2', 'HD21', 'HD22', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"ND2":{'torsion': 180.0, 'tree': 'B', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 13, 'angle': 116.6, 'blen': 1.335, 'charge': -0.867, 'type': 'N'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 16, 'I': 17, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.086, 'type': 'CT'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O'], ['CB', 'ND2', 'CG', 'OD1'], ['CG', 'HD21', 'ND2', 'HD22']],
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"INTX,KFORM":['INT', '1'],
"CG":{'torsion': 180.0, 'tree': 'B', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 111.1, 'blen': 1.522, 'charge': 0.675, 'type': 'C'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"HD21":{'torsion': 180.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 13, 'I': 14, 'angle': 119.8, 'blen': 1.01, 'charge': 0.344, 'type': 'H'},
"OD1":{'torsion': 0.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 120.5, 'blen': 1.229, 'charge': -0.47, 'type': 'O'},
"HD22":{'torsion': 0.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 13, 'I': 15, 'angle': 119.8, 'blen': 1.01, 'charge': 0.344, 'type': 'H'},
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 16, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
"NAMRES":'ASPARAGINE',
},
"LYH": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.066, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.066, 'type': 'HC'},
"HZ2":{'torsion': 300.0, 'tree': 'E', 'NC': 14, 'NB': 17, 'NA': 20, 'I': 22, 'angle': 109.47, 'blen': 1.01, 'charge': 0.29, 'type': 'H2'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O']],
"HZ1":{'torsion': 60.0, 'tree': 'E', 'NC': 14, 'NB': 17, 'NA': 20, 'I': 21, 'angle': 109.47, 'blen': 1.01, 'charge': 0.29, 'type': 'H2'},
"NZ":{'torsion': 180.0, 'tree': 'B', 'NC': 11, 'NB': 14, 'NA': 17, 'I': 20, 'angle': 109.47, 'blen': 1.47, 'charge': -0.8, 'type': 'NT'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 23, 'I': 24, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"INTX,KFORM":['INT', '1'],
"HG3":{'torsion': 60.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 13, 'angle': 109.5, 'blen': 1.09, 'charge': 0.024, 'type': 'HC'},
"HG2":{'torsion': 300.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.024, 'type': 'HC'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"HE3":{'torsion': 60.0, 'tree': 'E', 'NC': 11, 'NB': 14, 'NA': 17, 'I': 19, 'angle': 109.5, 'blen': 1.09, 'charge': 0.017, 'type': 'HC'},
"HE2":{'torsion': 300.0, 'tree': 'E', 'NC': 11, 'NB': 14, 'NA': 17, 'I': 18, 'angle': 109.5, 'blen': 1.09, 'charge': 0.017, 'type': 'HC'},
"HD2":{'torsion': 300.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 14, 'I': 15, 'angle': 109.5, 'blen': 1.09, 'charge': 0.04, 'type': 'HC'},
"HD3":{'torsion': 60.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 14, 'I': 16, 'angle': 109.5, 'blen': 1.09, 'charge': 0.04, 'type': 'HC'},
"NAMRES":'LYSINE',
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'CG', 'HG2', 'HG3', 'CD', 'HD2', 'HD3', 'CE', 'HE2', 'HE3', 'NZ', 'HZ1', 'HZ2', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"CD":{'torsion': 180.0, 'tree': '3', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 14, 'angle': 109.47, 'blen': 1.525, 'charge': -0.064, 'type': 'CT'},
"CE":{'torsion': 180.0, 'tree': '3', 'NC': 8, 'NB': 11, 'NA': 14, 'I': 17, 'angle': 109.47, 'blen': 1.525, 'charge': 0.196, 'type': 'CT'},
"CG":{'torsion': 180.0, 'tree': '3', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 109.47, 'blen': 1.525, 'charge': -0.048, 'type': 'CT'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.142, 'type': 'CT'},
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 23, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
},
"ASH": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.137, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.137, 'type': 'HC'},
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'CG', 'OD1', 'OD2', 'HD', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 15, 'I': 16, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.323, 'type': 'CT'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O'], ['CB', 'OD1', 'CG', 'OD2']],
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"HD":{'torsion': 180.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 13, 'I': 14, 'angle': 109.5, 'blen': 0.96, 'charge': 0.368, 'type': 'HO'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"INTX,KFORM":['INT', '0'],
"CG":{'torsion': 180.0, 'tree': 'B', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 109.47, 'blen': 1.527, 'charge': 0.803, 'type': 'C'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"OD2":{'torsion': 270.0, 'tree': 'S', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 13, 'angle': 117.2, 'blen': 1.26, 'charge': -0.63, 'type': 'OH'},
"OD1":{'torsion': 90.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 117.2, 'blen': 1.26, 'charge': -0.476, 'type': 'O'},
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 15, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
"NAMRES":'ASP neutral',
},
"ACE": { "atNameList":['H1', 'CH3', 'H2', 'H3', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"H2":{'torsion': 60.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 6, 'angle': 109.5, 'blen': 1.09, 'charge': 0.01, 'type': 'HC'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 9, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"impropTors":[['CH3', '+M', 'C', 'O']],
"CH3":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 90.0, 'blen': 1.09, 'charge': -0.142, 'type': 'CT'},
"INTX,KFORM":['INT', '1'],
"H1":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 90.0, 'blen': 1.0, 'charge': 0.01, 'type': 'HC'},
"H3":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.01, 'type': 'HC'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'NOMI', 'DU', 'BEG'],
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 8, 'angle': 111.1, 'blen': 1.53, 'charge': 0.616, 'type': 'C'},
"NAMRES":'ACE BEGINNING GROUP',
},
"HIE": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"NE2":{'torsion': 0.0, 'tree': 'B', 'NC': 11, 'NB': 12, 'NA': 13, 'I': 15, 'angle': 109.0, 'blen': 1.31, 'charge': -0.146, 'type': 'NA'},
"ND1":{'torsion': 180.0, 'tree': 'S', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 122.0, 'blen': 1.39, 'charge': -0.502, 'type': 'NB'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O'], ['CE1', 'CD2', 'NE2', 'HE2']],
"CE1":{'torsion': 180.0, 'tree': 'B', 'NC': 8, 'NB': 11, 'NA': 12, 'I': 13, 'angle': 108.0, 'blen': 1.32, 'charge': 0.241, 'type': 'CR'},
"INTX,KFORM":['INT', '1'],
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"HE2":{'torsion': 180.0, 'tree': 'E', 'NC': 12, 'NB': 13, 'NA': 15, 'I': 16, 'angle': 125.0, 'blen': 1.01, 'charge': 0.228, 'type': 'H'},
"HD2":{'torsion': 180.0, 'tree': 'E', 'NC': 13, 'NB': 15, 'NA': 17, 'I': 18, 'angle': 120.0, 'blen': 1.09, 'charge': 0.114, 'type': 'HC'},
"HE1":{'torsion': 180.0, 'tree': 'E', 'NC': 11, 'NB': 12, 'NA': 13, 'I': 14, 'angle': 120.0, 'blen': 1.09, 'charge': 0.036, 'type': 'HC'},
"NAMRES":'HISTIDINE EPSILONH',
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'CG', 'ND1', 'CE1', 'HE1', 'NE2', 'HE2', 'CD2', 'HD2', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"CD2":{'torsion': 0.0, 'tree': 'S', 'NC': 12, 'NB': 13, 'NA': 15, 'I': 17, 'angle': 110.0, 'blen': 1.36, 'charge': -0.184, 'type': 'CW'},
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 19, 'I': 20, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CG":{'torsion': 180.0, 'tree': 'S', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 115.0, 'blen': 1.51, 'charge': 0.251, 'type': 'CC'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.098, 'type': 'CT'},
"loopList":[['CG', 'CD2']],
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 19, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
},
"HID": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"NE2":{'torsion': 0.0, 'tree': 'S', 'NC': 11, 'NB': 12, 'NA': 14, 'I': 16, 'angle': 109.0, 'blen': 1.31, 'charge': -0.502, 'type': 'NB'},
"ND1":{'torsion': 180.0, 'tree': 'B', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 122.0, 'blen': 1.39, 'charge': -0.146, 'type': 'NA'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O'], ['CG', 'CE1', 'ND1', 'HD1']],
"CE1":{'torsion': 180.0, 'tree': 'B', 'NC': 8, 'NB': 11, 'NA': 12, 'I': 14, 'angle': 108.0, 'blen': 1.32, 'charge': 0.241, 'type': 'CR'},
"INTX,KFORM":['INT', '1'],
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"HD1":{'torsion': 0.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 12, 'I': 13, 'angle': 126.0, 'blen': 1.01, 'charge': 0.228, 'type': 'H'},
"HD2":{'torsion': 180.0, 'tree': 'E', 'NC': 14, 'NB': 16, 'NA': 17, 'I': 18, 'angle': 120.0, 'blen': 1.09, 'charge': 0.018, 'type': 'HC'},
"HE1":{'torsion': 180.0, 'tree': 'E', 'NC': 11, 'NB': 12, 'NA': 14, 'I': 15, 'angle': 120.0, 'blen': 1.09, 'charge': 0.036, 'type': 'HC'},
"NAMRES":'HISTIDINE DELTAH',
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'CG', 'ND1', 'HD1', 'CE1', 'HE1', 'NE2', 'CD2', 'HD2', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"CD2":{'torsion': 0.0, 'tree': 'S', 'NC': 12, 'NB': 14, 'NA': 16, 'I': 17, 'angle': 110.0, 'blen': 1.36, 'charge': 0.195, 'type': 'CV'},
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 19, 'I': 20, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CG":{'torsion': 180.0, 'tree': 'S', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 115.0, 'blen': 1.51, 'charge': -0.032, 'type': 'CC'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.098, 'type': 'CT'},
"loopList":[['CG', 'CD2']],
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 19, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
},
"filename":'all.in',
"OHB": { "INTX,KFORM":['INT', '1'],
"atNameList":['H', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"H":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 120.0, 'blen': 1.0, 'charge': 0.243, 'type': 'HO'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'NOMI', 'DU', 'BEG'],
"O":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 120.0, 'blen': 0.96, 'charge': -0.577, 'type': 'OH'},
"CUT":['0.00000'],
"NAMRES":'OH BEGINNING',
},
"CYX": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.0495, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.0495, 'type': 'HC'},
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'SG', 'LP1', 'LP2', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"SG":{'torsion': 180.0, 'tree': 'B', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 116.0, 'blen': 1.81, 'charge': 0.824, 'type': 'S'},
"LP1":{'torsion': 60.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 96.7, 'blen': 0.679, 'charge': -0.4045, 'type': 'LP'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 14, 'I': 15, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.098, 'type': 'CT'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O']],
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"INTX,KFORM":['INT', '1'],
"LP2":{'torsion': 300.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 13, 'angle': 96.7, 'blen': 0.679, 'charge': -0.4045, 'type': 'LP'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 14, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
"NAMRES":'CYSTINE(S-S BRIDGE)',
},
"PRO": { "HB2":{'torsion': 136.3, 'tree': 'E', 'NC': 5, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.061, 'type': 'HC'},
"HB3":{'torsion': 256.3, 'tree': 'E', 'NC': 5, 'NB': 8, 'NA': 11, 'I': 13, 'angle': 109.5, 'blen': 1.09, 'charge': 0.061, 'type': 'HC'},
"atNameList":['N', 'CD', 'HD2', 'HD3', 'CG', 'HG2', 'HG3', 'CB', 'HB2', 'HB3', 'CA', 'HA', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 14, 'NA': 16, 'I': 17, 'angle': 120.5, 'blen': 1.229, 'charge': -0.5, 'type': 'O'},
"impropTors":[['CA', '+M', 'C', 'O'], ['-M', 'CA', 'N', 'CD']],
"HG2":{'torsion': 218.0, 'tree': 'E', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.063, 'type': 'HC'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 117.0, 'blen': 1.337, 'charge': -0.229, 'type': 'N'},
"INTX,KFORM":['INT', '1'],
"HG3":{'torsion': 98.0, 'tree': 'E', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.063, 'type': 'HC'},
"CG":{'torsion': 200.1, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 8, 'angle': 103.2, 'blen': 1.5, 'charge': -0.121, 'type': 'CT'},
"HA":{'torsion': 81.1, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 14, 'I': 15, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"CB":{'torsion': 338.3, 'tree': 'B', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 11, 'angle': 106.0, 'blen': 1.51, 'charge': -0.115, 'type': 'CT'},
"CD":{'torsion': 356.1, 'tree': '3', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 126.1, 'blen': 1.458, 'charge': -0.012, 'type': 'CT'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"CA":{'torsion': 175.2, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 14, 'angle': 120.6, 'blen': 1.451, 'charge': 0.035, 'type': 'CT'},
"loopList":[['CB', 'CA']],
"HD2":{'torsion': 80.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 6, 'angle': 109.5, 'blen': 1.09, 'charge': 0.06, 'type': 'HC'},
"HD3":{'torsion': 320.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.06, 'type': 'HC'},
"CUT":['0.00000'],
"C":{'torsion': 0.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 14, 'I': 16, 'angle': 111.1, 'blen': 1.522, 'charge': 0.526, 'type': 'C'},
"NAMRES":'PROLINE',
},
"LYS": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"HZ2":{'torsion': 180.0, 'tree': 'E', 'NC': 14, 'NB': 17, 'NA': 20, 'I': 22, 'angle': 109.47, 'blen': 1.01, 'charge': 0.294, 'type': 'H3'},
"HZ3":{'torsion': 300.0, 'tree': 'E', 'NC': 14, 'NB': 17, 'NA': 20, 'I': 23, 'angle': 109.47, 'blen': 1.01, 'charge': 0.294, 'type': 'H3'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O']],
"HZ1":{'torsion': 60.0, 'tree': 'E', 'NC': 14, 'NB': 17, 'NA': 20, 'I': 21, 'angle': 109.47, 'blen': 1.01, 'charge': 0.294, 'type': 'H3'},
"NZ":{'torsion': 180.0, 'tree': '3', 'NC': 11, 'NB': 14, 'NA': 17, 'I': 20, 'angle': 109.47, 'blen': 1.47, 'charge': -0.138, 'type': 'N3'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 24, 'I': 25, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"INTX,KFORM":['INT', '1'],
"HG3":{'torsion': 60.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 13, 'angle': 109.5, 'blen': 1.09, 'charge': 0.116, 'type': 'HC'},
"HG2":{'torsion': 300.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.116, 'type': 'HC'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"HE3":{'torsion': 60.0, 'tree': 'E', 'NC': 11, 'NB': 14, 'NA': 17, 'I': 19, 'angle': 109.5, 'blen': 1.09, 'charge': 0.098, 'type': 'HC'},
"HE2":{'torsion': 300.0, 'tree': 'E', 'NC': 11, 'NB': 14, 'NA': 17, 'I': 18, 'angle': 109.5, 'blen': 1.09, 'charge': 0.098, 'type': 'HC'},
"HD2":{'torsion': 300.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 14, 'I': 15, 'angle': 109.5, 'blen': 1.09, 'charge': 0.122, 'type': 'HC'},
"HD3":{'torsion': 60.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 14, 'I': 16, 'angle': 109.5, 'blen': 1.09, 'charge': 0.122, 'type': 'HC'},
"NAMRES":'LYSINE',
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'CG', 'HG2', 'HG3', 'CD', 'HD2', 'HD3', 'CE', 'HE2', 'HE3', 'NZ', 'HZ1', 'HZ2', 'HZ3', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"CD":{'torsion': 180.0, 'tree': '3', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 14, 'angle': 109.47, 'blen': 1.525, 'charge': -0.18, 'type': 'CT'},
"CE":{'torsion': 180.0, 'tree': '3', 'NC': 8, 'NB': 11, 'NA': 14, 'I': 17, 'angle': 109.47, 'blen': 1.525, 'charge': -0.038, 'type': 'CT'},
"CG":{'torsion': 180.0, 'tree': '3', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 109.47, 'blen': 1.525, 'charge': -0.16, 'type': 'CT'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.098, 'type': 'CT'},
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 24, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
},
"D2AP": { "C4":{'torsion': 0.0, 'tree': 'E', 'NC': 22, 'NB': 23, 'NA': 25, 'I': 26, 'angle': 110.8, 'blen': 1.35, 'charge': 0.546, 'type': 'CB'},
"C5":{'torsion': 0.0, 'tree': 'S', 'NC': 13, 'NB': 14, 'NA': 16, 'I': 17, 'angle': 104.0, 'blen': 1.39, 'charge': -0.097, 'type': 'CB'},
"C6":{'torsion': 180.0, 'tree': 'B', 'NC': 14, 'NB': 16, 'NA': 17, 'I': 18, 'angle': 132.42, 'blen': 1.4, 'charge': 0.769, 'type': 'CQ'},
"O3'":{'torsion': -203.47, 'tree': 'M', 'NC': 5, 'NB': 8, 'NA': 27, 'I': 32, 'angle': 116.52, 'blen': 1.42, 'charge': -0.509, 'type': 'OS'},
"impropTors":[['C2', 'H21', 'N2', 'H22']],
"C2":{'torsion': 0.0, 'tree': 'B', 'NC': 17, 'NB': 18, 'NA': 22, 'I': 21, 'angle': 118.8, 'blen': 1.33, 'charge': 0.661, 'type': 'CA'},
"N3":{'torsion': 0.0, 'tree': 'S', 'NC': 18, 'NB': 22, 'NA': 23, 'I': 25, 'angle': 129.17, 'blen': 1.32, 'charge': -0.728, 'type': 'NC'},
"C2'":{'torsion': -86.3, 'tree': 'B', 'NC': 5, 'NB': 8, 'NA': 27, 'I': 29, 'angle': 102.8, 'blen': 1.53, 'charge': -0.307, 'type': 'CT'},
"H2'1":{'torsion': 120.0, 'tree': 'E', 'NC': 8, 'NB': 27, 'NA': 29, 'I': 30, 'angle': 109.5, 'blen': 1.09, 'charge': 0.081, 'type': 'HC'},
"C8":{'torsion': 81.59, 'tree': 'B', 'NC': 10, 'NB': 11, 'NA': 13, 'I': 14, 'angle': 131.2, 'blen': 1.37, 'charge': 0.263, 'type': 'CK'},
"O1'":{'torsion': -86.31, 'tree': 'S', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 10, 'angle': 108.86, 'blen': 1.46, 'charge': -0.368, 'type': 'OS'},
"INTX,KFORM":['INT', '1'],
"H6":{'torsion': 180.0, 'tree': 'E', 'NC': 18, 'NB': 22, 'NA': 23, 'I': 19, 'angle': 120.0, 'blen': 1.08, 'charge': -0.032, 'type': 'HC'},
"N7":{'torsion': 177.0, 'tree': 'S', 'NC': 11, 'NB': 13, 'NA': 14, 'I': 16, 'angle': 113.93, 'blen': 1.3, 'charge': -0.543, 'type': 'NB'},
"H2'2":{'torsion': 240.0, 'tree': 'E', 'NC': 8, 'NB': 27, 'NA': 29, 'I': 31, 'angle': 109.5, 'blen': 1.09, 'charge': 0.081, 'type': 'HC'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"loopList":[["C1'", "C2'"], ['C4', 'C5'], ['C4', 'N9']],
"H21":{'torsion': 0.0, 'tree': 'E', 'NC': 17, 'NB': 18, 'NA': 19, 'I': 23, 'angle': 120.0, 'blen': 1.01, 'charge': 0.324, 'type': 'H2'},
"NAMRES":'2-AMINO PURINE',
"H8":{'torsion': 0.0, 'tree': 'E', 'NC': 11, 'NB': 13, 'NA': 14, 'I': 15, 'angle': 120.0, 'blen': 1.08, 'charge': 0.062, 'type': 'HC'},
"H4'":{'torsion': -200.0, 'tree': 'E', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.056, 'type': 'HC'},
"H1'":{'torsion': -240.0, 'tree': 'E', 'NC': 8, 'NB': 10, 'NA': 11, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.009, 'type': 'HC'},
"atNameList":["O5'", "C5'", "H5'1", "H5'2", "C4'", "H4'", "O1'", "C1'", "H1'", 'N9', 'C8', 'H8', 'N7', 'C5', 'C6', 'H6', 'N1', 'C2', 'N2', 'H21', 'H22', 'N3', 'C4', "C3'", "H3'", "C2'", "H2'1", "H2'2", "O3'"],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"N2":{'torsion': 0.0, 'tree': 'B', 'NC': 16, 'NB': 17, 'NA': 18, 'I': 22, 'angle': 123.5, 'blen': 1.34, 'charge': -0.768, 'type': 'N2'},
"N1":{'torsion': 180.0, 'tree': 'S', 'NC': 16, 'NB': 17, 'NA': 18, 'I': 20, 'angle': 117.43, 'blen': 1.34, 'charge': -0.774, 'type': 'NC'},
"H5'2":{'torsion': 90.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.021, 'type': 'HC'},
"H5'1":{'torsion': -30.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 6, 'angle': 109.5, 'blen': 1.09, 'charge': 0.021, 'type': 'HC'},
"H3'":{'torsion': 30.0, 'tree': 'E', 'NC': 5, 'NB': 8, 'NA': 27, 'I': 28, 'angle': 109.5, 'blen': 1.09, 'charge': 0.025, 'type': 'HC'},
"C4'":{'torsion': -151.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 8, 'angle': 110.0, 'blen': 1.52, 'charge': 0.036, 'type': 'CT'},
"N9":{'torsion': -127.7, 'tree': 'S', 'NC': 8, 'NB': 10, 'NA': 11, 'I': 13, 'angle': 109.59, 'blen': 1.52, 'charge': -0.073, 'type': 'N*'},
"C5'":{'torsion': -39.22, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.0, 'blen': 1.44, 'charge': 0.118, 'type': 'CT'},
"H22":{'torsion': 180.0, 'tree': 'E', 'NC': 17, 'NB': 18, 'NA': 19, 'I': 24, 'angle': 120.0, 'blen': 1.01, 'charge': 0.335, 'type': 'H2'},
"C1'":{'torsion': 105.6, 'tree': 'B', 'NC': 5, 'NB': 8, 'NA': 10, 'I': 11, 'angle': 110.04, 'blen': 1.42, 'charge': 0.376, 'type': 'CT'},
"O5'":{'torsion': -98.89, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 101.43, 'blen': 1.6, 'charge': -0.509, 'type': 'OS'},
"C3'":{'torsion': -329.11, 'tree': 'M', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 27, 'angle': 115.78, 'blen': 1.53, 'charge': 0.233, 'type': 'CT'},
"CUT":['0.00000'],
},
"ASP": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.071, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.071, 'type': 'HC'},
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'CG', 'OD1', 'OD2', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 14, 'I': 15, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.398, 'type': 'CT'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O'], ['CB', 'OD1', 'CG', 'OD2']],
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"INTX,KFORM":['INT', '1'],
"CG":{'torsion': 180.0, 'tree': 'B', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 109.47, 'blen': 1.527, 'charge': 0.714, 'type': 'C'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"OD2":{'torsion': 270.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 13, 'angle': 117.2, 'blen': 1.26, 'charge': -0.721, 'type': 'O2'},
"OD1":{'torsion': 90.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 117.2, 'blen': 1.26, 'charge': -0.721, 'type': 'O2'},
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 14, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
"NAMRES":'ASPARTIC ACID',
},
"CYS": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"HG":{'torsion': 180.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 96.0, 'blen': 1.33, 'charge': 0.135, 'type': 'HS'},
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'SG', 'HG', 'LP1', 'LP2', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"SG":{'torsion': 180.0, 'tree': '3', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 116.0, 'blen': 1.81, 'charge': 0.827, 'type': 'SH'},
"LP1":{'torsion': 60.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 13, 'angle': 96.7, 'blen': 0.679, 'charge': -0.481, 'type': 'LP'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 15, 'I': 16, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.06, 'type': 'CT'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O']],
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"INTX,KFORM":['INT', '1'],
"LP2":{'torsion': 300.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 14, 'angle': 96.7, 'blen': 0.679, 'charge': -0.481, 'type': 'LP'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 15, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
"NAMRES":'CYSTEINE',
},
"HE": { "INTX,KFORM":['INT', '1'],
"atNameList":['H'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"H":{'torsion': -211.37, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 114.97, 'blen': 0.96, 'charge': 0.306, 'type': 'HO'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"CUT":['0.00000'],
"NAMRES":'H END',
},
"POHE": { "INTX,KFORM":['INT', '1'],
"atNameList":['O', 'H'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"H":{'torsion': -39.22, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 107.0, 'blen': 0.96, 'charge': 0.38, 'type': 'HO'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"O":{'torsion': 180.89, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 113.0, 'blen': 1.36, 'charge': -0.38, 'type': 'OH'},
"CUT":['0.00000'],
"NAMRES":'PROTEIN-OH END',
},
"GLU": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.092, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.092, 'type': 'HC'},
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'CG', 'HG2', 'HG3', 'CD', 'OE1', 'OE2', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O'], ['CG', 'OE1', 'CD', 'OE2']],
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 17, 'I': 18, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CD":{'torsion': 180.0, 'tree': 'B', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 14, 'angle': 109.47, 'blen': 1.527, 'charge': 0.714, 'type': 'C'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.184, 'type': 'CT'},
"HG2":{'torsion': 300.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.071, 'type': 'HC'},
"OE1":{'torsion': 90.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 14, 'I': 15, 'angle': 117.2, 'blen': 1.26, 'charge': -0.721, 'type': 'O2'},
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"INTX,KFORM":['INT', '1'],
"HG3":{'torsion': 60.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 13, 'angle': 109.5, 'blen': 1.09, 'charge': 0.071, 'type': 'HC'},
"CG":{'torsion': 180.0, 'tree': '3', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 109.47, 'blen': 1.51, 'charge': -0.398, 'type': 'CT'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 17, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
"OE2":{'torsion': 270.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 14, 'I': 16, 'angle': 117.2, 'blen': 1.26, 'charge': -0.721, 'type': 'O2'},
"NAMRES":'GLUTAMIC ACID',
},
"HB": { "INTX,KFORM":['INT', '1'],
"atNameList":['H'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"H":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 120.0, 'blen': 1.0, 'charge': 0.279, 'type': 'HO'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'NOMI', 'DU', 'BEG'],
"CUT":['0.00000'],
"NAMRES":'H BEGINNING',
},
"DADE": { "H2":{'torsion': 180.0, 'tree': 'E', 'NC': 18, 'NB': 22, 'NA': 23, 'I': 24, 'angle': 120.0, 'blen': 1.08, 'charge': -0.032, 'type': 'HC'},
"C5":{'torsion': 0.0, 'tree': 'S', 'NC': 13, 'NB': 14, 'NA': 16, 'I': 17, 'angle': 104.0, 'blen': 1.39, 'charge': -0.097, 'type': 'CB'},
"C6":{'torsion': 180.0, 'tree': 'B', 'NC': 14, 'NB': 16, 'NA': 17, 'I': 18, 'angle': 132.42, 'blen': 1.4, 'charge': 0.769, 'type': 'CA'},
"O3'":{'torsion': -203.47, 'tree': 'M', 'NC': 5, 'NB': 8, 'NA': 27, 'I': 32, 'angle': 116.52, 'blen': 1.42, 'charge': -0.509, 'type': 'OS'},
"H62":{'torsion': 180.0, 'tree': 'E', 'NC': 17, 'NB': 18, 'NA': 19, 'I': 21, 'angle': 120.0, 'blen': 1.01, 'charge': 0.335, 'type': 'H2'},
"impropTors":[['C6', 'H61', 'N6', 'H62']],
"C2":{'torsion': 0.0, 'tree': 'B', 'NC': 17, 'NB': 18, 'NA': 22, 'I': 23, 'angle': 118.8, 'blen': 1.33, 'charge': 0.661, 'type': 'CQ'},
"H61":{'torsion': 0.0, 'tree': 'E', 'NC': 17, 'NB': 18, 'NA': 19, 'I': 20, 'angle': 120.0, 'blen': 1.01, 'charge': 0.324, 'type': 'H2'},
"C2'":{'torsion': -86.3, 'tree': 'B', 'NC': 5, 'NB': 8, 'NA': 27, 'I': 29, 'angle': 102.8, 'blen': 1.53, 'charge': -0.307, 'type': 'CT'},
"H2'1":{'torsion': 120.0, 'tree': 'E', 'NC': 8, 'NB': 27, 'NA': 29, 'I': 30, 'angle': 109.5, 'blen': 1.09, 'charge': 0.081, 'type': 'HC'},
"C8":{'torsion': 81.59, 'tree': 'B', 'NC': 10, 'NB': 11, 'NA': 13, 'I': 14, 'angle': 131.2, 'blen': 1.37, 'charge': 0.263, 'type': 'CK'},
"O1'":{'torsion': -86.31, 'tree': 'S', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 10, 'angle': 108.86, 'blen': 1.46, 'charge': -0.368, 'type': 'OS'},
"INTX,KFORM":['INT', '1'],
"O5'":{'torsion': -98.89, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 101.43, 'blen': 1.6, 'charge': -0.509, 'type': 'OS'},
"N7":{'torsion': 177.0, 'tree': 'S', 'NC': 11, 'NB': 13, 'NA': 14, 'I': 16, 'angle': 113.93, 'blen': 1.3, 'charge': -0.543, 'type': 'NB'},
"H2'2":{'torsion': 240.0, 'tree': 'E', 'NC': 8, 'NB': 27, 'NA': 29, 'I': 31, 'angle': 109.5, 'blen': 1.09, 'charge': 0.081, 'type': 'HC'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"C4":{'torsion': 0.0, 'tree': 'E', 'NC': 22, 'NB': 23, 'NA': 25, 'I': 26, 'angle': 110.8, 'blen': 1.35, 'charge': 0.546, 'type': 'CB'},
"loopList":[["C1'", "C2'"], ['C4', 'C5'], ['C4', 'N9']],
"H3'":{'torsion': 30.0, 'tree': 'E', 'NC': 5, 'NB': 8, 'NA': 27, 'I': 28, 'angle': 109.5, 'blen': 1.09, 'charge': 0.025, 'type': 'HC'},
"NAMRES":'D-ADENOSINE',
"H8":{'torsion': 0.0, 'tree': 'E', 'NC': 11, 'NB': 13, 'NA': 14, 'I': 15, 'angle': 120.0, 'blen': 1.08, 'charge': 0.062, 'type': 'HC'},
"H4'":{'torsion': -200.0, 'tree': 'E', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.056, 'type': 'HC'},
"H1'":{'torsion': -240.0, 'tree': 'E', 'NC': 8, 'NB': 10, 'NA': 11, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.009, 'type': 'HC'},
"N6":{'torsion': 0.0, 'tree': 'B', 'NC': 16, 'NB': 17, 'NA': 18, 'I': 19, 'angle': 123.5, 'blen': 1.34, 'charge': -0.768, 'type': 'N2'},
"atNameList":["O5'", "C5'", "H5'1", "H5'2", "C4'", "H4'", "O1'", "C1'", "H1'", 'N9', 'C8', 'H8', 'N7', 'C5', 'C6', 'N6', 'H61', 'H62', 'N1', 'C2', 'H2', 'N3', 'C4', "C3'", "H3'", "C2'", "H2'1", "H2'2", "O3'"],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"N1":{'torsion': 180.0, 'tree': 'S', 'NC': 16, 'NB': 17, 'NA': 18, 'I': 22, 'angle': 117.43, 'blen': 1.34, 'charge': -0.774, 'type': 'NC'},
"H5'2":{'torsion': 90.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.021, 'type': 'HC'},
"H5'1":{'torsion': -30.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 6, 'angle': 109.5, 'blen': 1.09, 'charge': 0.021, 'type': 'HC'},
"C4'":{'torsion': -151.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 5, 'I': 8, 'angle': 110.0, 'blen': 1.52, 'charge': 0.036, 'type': 'CT'},
"N9":{'torsion': -127.7, 'tree': 'S', 'NC': 8, 'NB': 10, 'NA': 11, 'I': 13, 'angle': 109.59, 'blen': 1.52, 'charge': -0.073, 'type': 'N*'},
"C5'":{'torsion': -39.22, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.0, 'blen': 1.44, 'charge': 0.118, 'type': 'CT'},
"C1'":{'torsion': 105.6, 'tree': 'B', 'NC': 5, 'NB': 8, 'NA': 10, 'I': 11, 'angle': 110.04, 'blen': 1.42, 'charge': 0.376, 'type': 'CT'},
"C3'":{'torsion': -329.11, 'tree': 'M', 'NC': 4, 'NB': 5, 'NA': 8, 'I': 27, 'angle': 115.78, 'blen': 1.53, 'charge': 0.233, 'type': 'CT'},
"N3":{'torsion': 0.0, 'tree': 'S', 'NC': 18, 'NB': 22, 'NA': 23, 'I': 25, 'angle': 129.17, 'blen': 1.32, 'charge': -0.728, 'type': 'NC'},
"CUT":['0.00000'],
},
"GLY": { "HA3":{'torsion': 60.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 109.5, 'blen': 1.09, 'charge': 0.032, 'type': 'HC'},
"atNameList":['N', 'H', 'CA', 'HA2', 'HA3', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"HA2":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.032, 'type': 'HC'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 9, 'I': 10, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O']],
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"INTX,KFORM":['INT', '1'],
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 9, 'angle': 110.4, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
"NAMRES":'GLYCINE',
},
"HIP": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.086, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.086, 'type': 'HC'},
"NE2":{'torsion': 0.0, 'tree': 'B', 'NC': 11, 'NB': 12, 'NA': 14, 'I': 16, 'angle': 109.0, 'blen': 1.31, 'charge': -0.058, 'type': 'NA'},
"ND1":{'torsion': 180.0, 'tree': 'B', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 122.0, 'blen': 1.39, 'charge': -0.058, 'type': 'NA'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O'], ['CG', 'CE1', 'ND1', 'HD1'], ['CE1', 'CD2', 'NE2', 'HE2']],
"CE1":{'torsion': 180.0, 'tree': 'B', 'NC': 8, 'NB': 11, 'NA': 12, 'I': 14, 'angle': 108.0, 'blen': 1.32, 'charge': 0.114, 'type': 'CR'},
"INTX,KFORM":['INT', '1'],
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"HD1":{'torsion': 0.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 12, 'I': 13, 'angle': 126.0, 'blen': 1.01, 'charge': 0.306, 'type': 'H'},
"HD2":{'torsion': 180.0, 'tree': 'E', 'NC': 14, 'NB': 16, 'NA': 18, 'I': 19, 'angle': 120.0, 'blen': 1.09, 'charge': 0.153, 'type': 'HC'},
"HE1":{'torsion': 180.0, 'tree': 'E', 'NC': 11, 'NB': 12, 'NA': 14, 'I': 15, 'angle': 120.0, 'blen': 1.09, 'charge': 0.158, 'type': 'HC'},
"HE2":{'torsion': 180.0, 'tree': 'E', 'NC': 12, 'NB': 14, 'NA': 16, 'I': 17, 'angle': 125.0, 'blen': 1.01, 'charge': 0.306, 'type': 'H'},
"NAMRES":'HISTIDINE PLUS',
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'CG', 'ND1', 'HD1', 'CE1', 'HE1', 'NE2', 'HE2', 'CD2', 'HD2', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"CD2":{'torsion': 0.0, 'tree': 'S', 'NC': 12, 'NB': 14, 'NA': 16, 'I': 18, 'angle': 110.0, 'blen': 1.36, 'charge': -0.037, 'type': 'CW'},
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 20, 'I': 21, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CG":{'torsion': 180.0, 'tree': 'S', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 115.0, 'blen': 1.51, 'charge': 0.058, 'type': 'CC'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.098, 'type': 'CT'},
"loopList":[['CG', 'CD2']],
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 20, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
},
"GLN": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.038, 'type': 'HC'},
"NE2":{'torsion': 180.0, 'tree': 'B', 'NC': 8, 'NB': 11, 'NA': 14, 'I': 16, 'angle': 116.6, 'blen': 1.335, 'charge': -0.867, 'type': 'N'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O'], ['CG', 'NE2', 'CD', 'OE1'], ['CD', 'HE21', 'NE2', 'HE22']],
"INTX,KFORM":['INT', '1'],
"HG3":{'torsion': 60.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 13, 'angle': 109.5, 'blen': 1.09, 'charge': 0.057, 'type': 'HC'},
"HG2":{'torsion': 300.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.057, 'type': 'HC'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"NAMRES":'GLUTAMINE',
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'CG', 'HG2', 'HG3', 'CD', 'OE1', 'NE2', 'HE21', 'HE22', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"HE21":{'torsion': 180.0, 'tree': 'E', 'NC': 11, 'NB': 14, 'NA': 16, 'I': 17, 'angle': 119.8, 'blen': 1.01, 'charge': 0.344, 'type': 'H'},
"HE22":{'torsion': 0.0, 'tree': 'E', 'NC': 11, 'NB': 14, 'NA': 16, 'I': 18, 'angle': 119.8, 'blen': 1.01, 'charge': 0.344, 'type': 'H'},
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"CD":{'torsion': 180.0, 'tree': 'B', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 14, 'angle': 111.1, 'blen': 1.522, 'charge': 0.675, 'type': 'C'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 19, 'I': 20, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CG":{'torsion': 180.0, 'tree': '3', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 109.47, 'blen': 1.525, 'charge': -0.102, 'type': 'CT'},
"OE1":{'torsion': 0.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 14, 'I': 15, 'angle': 120.5, 'blen': 1.229, 'charge': -0.47, 'type': 'O'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.098, 'type': 'CT'},
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 19, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
},
"ILE": { "HG22":{'torsion': 180.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 10, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.029, 'type': 'HC'},
"HG23":{'torsion': 300.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 10, 'I': 13, 'angle': 109.5, 'blen': 1.09, 'charge': 0.029, 'type': 'HC'},
"HG21":{'torsion': 60.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 10, 'I': 11, 'angle': 109.5, 'blen': 1.09, 'charge': 0.029, 'type': 'HC'},
"HD13":{'torsion': 300.0, 'tree': 'E', 'NC': 8, 'NB': 14, 'NA': 17, 'I': 20, 'angle': 109.5, 'blen': 1.09, 'charge': 0.028, 'type': 'HC'},
"HG13":{'torsion': 60.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 14, 'I': 16, 'angle': 109.5, 'blen': 1.09, 'charge': 0.027, 'type': 'HC'},
"HG12":{'torsion': 300.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 14, 'I': 15, 'angle': 109.5, 'blen': 1.09, 'charge': 0.027, 'type': 'HC'},
"INTX,KFORM":['INT', '1'],
"CG2":{'torsion': 60.0, 'tree': '3', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.47, 'blen': 1.525, 'charge': -0.085, 'type': 'CT'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"CG1":{'torsion': 180.0, 'tree': '3', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 14, 'angle': 109.47, 'blen': 1.525, 'charge': -0.049, 'type': 'CT'},
"NAMRES":'ISOLEUCINE',
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB', 'CG2', 'HG21', 'HG22', 'HG23', 'CG1', 'HG12', 'HG13', 'CD1', 'HD11', 'HD12', 'HD13', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"HD11":{'torsion': 60.0, 'tree': 'E', 'NC': 8, 'NB': 14, 'NA': 17, 'I': 18, 'angle': 109.5, 'blen': 1.09, 'charge': 0.028, 'type': 'HC'},
"HD12":{'torsion': 180.0, 'tree': 'E', 'NC': 8, 'NB': 14, 'NA': 17, 'I': 19, 'angle': 109.5, 'blen': 1.09, 'charge': 0.028, 'type': 'HC'},
"HB":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.022, 'type': 'HC'},
"CD1":{'torsion': 180.0, 'tree': '3', 'NC': 6, 'NB': 8, 'NA': 14, 'I': 17, 'angle': 109.47, 'blen': 1.525, 'charge': -0.085, 'type': 'CT'},
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 21, 'I': 22, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 109.47, 'blen': 1.525, 'charge': -0.012, 'type': 'CT'},
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 21, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O']],
},
"DOHE": { "INTX,KFORM":['INT', '1'],
"atNameList":['O', 'H'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.000', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.000', '90.000', '0.000', '0.00000']],
"H":{'torsion': -39.22, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.0, 'blen': 0.96, 'charge': 0.226, 'type': 'HO'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"O":{'torsion': -98.89, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 101.43, 'blen': 1.6, 'charge': -0.621, 'type': 'OH'},
"CUT":['0.00000'],
"NAMRES":'D-OH END',
},
"GLH": { "HB2":{'torsion': 300.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 9, 'angle': 109.5, 'blen': 1.09, 'charge': 0.092, 'type': 'HC'},
"HB3":{'torsion': 60.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 10, 'angle': 109.5, 'blen': 1.09, 'charge': 0.092, 'type': 'HC'},
"impropTors":[['-M', 'CA', 'N', 'H'], ['CA', '+M', 'C', 'O'], ['CG', 'OE1', 'CD', 'OE2']],
"INTX,KFORM":['INT', '0'],
"HG3":{'torsion': 60.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 13, 'angle': 109.5, 'blen': 1.09, 'charge': 0.137, 'type': 'HC'},
"HG2":{'torsion': 300.0, 'tree': 'E', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 12, 'angle': 109.5, 'blen': 1.09, 'charge': 0.137, 'type': 'HC'},
"N":{'torsion': 180.0, 'tree': 'M', 'NC': 1, 'NB': 2, 'NA': 3, 'I': 4, 'angle': 116.6, 'blen': 1.335, 'charge': -0.463, 'type': 'N'},
"IFIXC,IOMIT,ISYMDU,IPOS":['CORR', 'OMIT', 'DU', 'BEG'],
"CA":{'torsion': 180.0, 'tree': 'M', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 6, 'angle': 121.9, 'blen': 1.449, 'charge': 0.035, 'type': 'CT'},
"OE2":{'torsion': 270.0, 'tree': 'S', 'NC': 8, 'NB': 11, 'NA': 14, 'I': 16, 'angle': 117.2, 'blen': 1.26, 'charge': -0.63, 'type': 'OH'},
"NAMRES":'GLU neutral',
"HE":{'torsion': 180.0, 'tree': 'E', 'NC': 11, 'NB': 14, 'NA': 16, 'I': 17, 'angle': 109.5, 'blen': 0.96, 'charge': 0.368, 'type': 'HO'},
"atNameList":['N', 'H', 'CA', 'HA', 'CB', 'HB2', 'HB3', 'CG', 'HG2', 'HG3', 'CD', 'OE1', 'OE2', 'HE', 'C', 'O'],
"DUMM":[['1', 'DUMM', 'DU', 'M', '0', '-1', '-2', '0.000', '0.000', '0.000', '0.00000'], ['2', 'DUMM', 'DU', 'M', '1', '0', '-1', '1.449', '0.000', '0.000', '0.00000'], ['3', 'DUMM', 'DU', 'M', '2', '1', '0', '1.522', '111.100', '0.000', '0.00000']],
"HA":{'torsion': 300.0, 'tree': 'E', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 7, 'angle': 109.5, 'blen': 1.09, 'charge': 0.048, 'type': 'HC'},
"H":{'torsion': 0.0, 'tree': 'E', 'NC': 2, 'NB': 3, 'NA': 4, 'I': 5, 'angle': 119.8, 'blen': 1.01, 'charge': 0.252, 'type': 'H'},
"CD":{'torsion': 180.0, 'tree': 'B', 'NC': 6, 'NB': 8, 'NA': 11, 'I': 14, 'angle': 109.47, 'blen': 1.527, 'charge': 0.803, 'type': 'C'},
"O":{'torsion': 0.0, 'tree': 'E', 'NC': 4, 'NB': 6, 'NA': 18, 'I': 19, 'angle': 120.5, 'blen': 1.229, 'charge': -0.504, 'type': 'O'},
"CG":{'torsion': 180.0, 'tree': '3', 'NC': 4, 'NB': 6, 'NA': 8, 'I': 11, 'angle': 109.47, 'blen': 1.51, 'charge': -0.323, 'type': 'CT'},
"OE1":{'torsion': 90.0, 'tree': 'E', 'NC': 8, 'NB': 11, 'NA': 14, 'I': 15, 'angle': 117.2, 'blen': 1.26, 'charge': -0.476, 'type': 'O'},
"CB":{'torsion': 60.0, 'tree': '3', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 8, 'angle': 111.1, 'blen': 1.525, 'charge': -0.184, 'type': 'CT'},
"CUT":['0.00000'],
"C":{'torsion': 180.0, 'tree': 'M', 'NC': 3, 'NB': 4, 'NA': 6, 'I': 18, 'angle': 111.1, 'blen': 1.522, 'charge': 0.616, 'type': 'C'},
},
}
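The nested mapping above (residue name, then per-atom records carrying `charge`, `type`, and internal-coordinate fields such as `blen` and `angle`) can be consumed programmatically. Below is a minimal sketch; the name `residues` is an assumption, since the actual variable binding lies outside this excerpt, and only a small GLY slice is reproduced so the snippet runs standalone:

```python
# `residues` stands in for the full residue-topology dictionary above;
# only the GLY charges needed for the example are reproduced here.
residues = {
    "GLY": {
        "N": {'charge': -0.463, 'type': 'N'},
        "H": {'charge': 0.252, 'type': 'H'},
        "CA": {'charge': 0.035, 'type': 'CT'},
        "HA2": {'charge': 0.032, 'type': 'HC'},
        "HA3": {'charge': 0.032, 'type': 'HC'},
        "C": {'charge': 0.616, 'type': 'C'},
        "O": {'charge': -0.504, 'type': 'O'},
        "atNameList": ['N', 'H', 'CA', 'HA2', 'HA3', 'C', 'O'],
    },
}

def total_charge(res):
    # Sum partial charges over the atoms named in atNameList; this skips
    # bookkeeping keys (DUMM, CUT, impropTors, ...) that are not atom records.
    return sum(res[atom]['charge'] for atom in res['atNameList'])

total = total_charge(residues['GLY'])  # glycine backbone is net-neutral
```

Iterating over `atNameList` rather than the dict keys is the safer design here, because each residue entry mixes atom records with metadata entries like `"CUT"` and `"NAMRES"`.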
# alphabot-ppl/alphabot_exceptions.py (tararouras/alphabot-ppl, MIT license)
class BeaconNotFoundError(Exception):
    """Raised when no beacon can be detected."""


class BeaconNotValidError(Exception):
    """Raised when a detected beacon fails validation."""


class InsufficientLocalizationInfoError(Exception):
    """Raised when too little information is available to localize."""
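These exception types can be exercised with a small driver. Everything below is illustrative: the `localize` helper and its three-beacon rule are hypothetical, not part of the module above, and the classes are re-declared so the sketch runs on its own:

```python
class BeaconNotFoundError(Exception):
    """Re-declared here so the sketch is self-contained."""

class InsufficientLocalizationInfoError(Exception):
    """Re-declared here so the sketch is self-contained."""

def localize(beacons):
    # Hypothetical example: `beacons` is assumed to be a list of (x, y)
    # positions, and three sightings are needed for a 2-D fix. A trivial
    # centroid stands in for real trilateration.
    if not beacons:
        raise BeaconNotFoundError("no beacons detected")
    if len(beacons) < 3:
        raise InsufficientLocalizationInfoError(
            "need 3 beacons for a fix, got %d" % len(beacons))
    xs, ys = zip(*beacons)
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

Distinct exception classes let callers react differently to "nothing seen" (keep scanning) versus "seen, but not enough" (wait for more sightings).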
# benchmarks/SimResults/combinations_spec_heteroFair/cmp_astarlbmtontoh264ref/power.py
# (TugberkArkose/MLScheduler, Unlicense)
power = {'BUSES': {'Area': 1.33155,
'Bus/Area': 1.33155,
'Bus/Gate Leakage': 0.00662954,
'Bus/Peak Dynamic': 0.0,
'Bus/Runtime Dynamic': 0.0,
'Bus/Subthreshold Leakage': 0.0691322,
'Bus/Subthreshold Leakage with power gating': 0.0259246,
'Gate Leakage': 0.00662954,
'Peak Dynamic': 0.0,
'Runtime Dynamic': 0.0,
'Subthreshold Leakage': 0.0691322,
'Subthreshold Leakage with power gating': 0.0259246},
'Core': [{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0738199,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.26067,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.438159,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.42826,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.741591,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.425323,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.59517,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.35614,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 6.33991,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0827777,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0155248,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.138294,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.114815,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.221072,
'Execution Unit/Register Files/Runtime Dynamic': 0.13034,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.354398,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.953033,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 3.34459,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00177072,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00177072,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00154811,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000602473,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00164933,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00673888,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.01677,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.110375,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.290041,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.374882,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 0.798806,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0728432,
'L2/Runtime Dynamic': 0.0363727,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 4.9272,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.85472,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.119383,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.119383,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 5.49325,
'Load Store Unit/Runtime Dynamic': 2.56285,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.294378,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.588756,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.104476,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.105517,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0477031,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.739112,
'Memory Management Unit/Runtime Dynamic': 0.15322,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 26.1755,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.288792,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.025374,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.221056,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 0.535222,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 7.43107,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0504615,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.242323,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.339832,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.164813,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.265838,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.134186,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.564837,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.136397,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.64992,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0642015,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00691301,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0661441,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.051126,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.130346,
'Execution Unit/Register Files/Runtime Dynamic': 0.058039,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.151965,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.38403,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.65461,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.000735891,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.000735891,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000663488,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000269168,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000734428,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.0028697,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00625076,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0491487,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 3.12628,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.115666,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.166931,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 5.49652,
'Instruction Fetch Unit/Runtime Dynamic': 0.340866,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.061494,
'L2/Runtime Dynamic': 0.0345681,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.6442,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.756911,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0455224,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0455225,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 2.85917,
'Load Store Unit/Runtime Dynamic': 1.02693,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.112251,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.224502,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0398381,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0407436,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.194381,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0190153,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.418924,
'Memory Management Unit/Runtime Dynamic': 0.0597589,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 17.0755,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.168885,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.00949122,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0810441,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.25942,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 3.37615,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0508706,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.242645,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.271994,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.190433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.307161,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.155044,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.652638,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.176099,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.64827,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0513855,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0079876,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0769161,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0590732,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.128302,
'Execution Unit/Register Files/Runtime Dynamic': 0.0670608,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.174761,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.449443,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.81716,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00111812,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00111812,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00100997,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000410713,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000848591,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00409479,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00943095,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0567886,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 3.61224,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.150668,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.19288,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 6.00606,
'Instruction Fetch Unit/Runtime Dynamic': 0.413862,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0325512,
'L2/Runtime Dynamic': 0.0148739,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 3.127,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.941026,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0611422,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0611421,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.41573,
'Load Store Unit/Runtime Dynamic': 1.3037,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.150766,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.301532,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0535075,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0539555,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.224596,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0248203,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.472621,
'Memory Management Unit/Runtime Dynamic': 0.0787759,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 18.1647,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.135172,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0102368,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0952826,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.240692,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 3.86907,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.017627,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.216534,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.0855582,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.238796,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.385169,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.194421,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.818386,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.259997,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.48799,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0161638,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0100162,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0794208,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0740758,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0955846,
'Execution Unit/Register Files/Runtime Dynamic': 0.084092,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.171725,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.5036,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 2.02799,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00130767,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00130767,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00116785,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.00046788,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00106411,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.0048473,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0115065,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.071211,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 4.52963,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.195208,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.241865,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 6.96798,
'Instruction Fetch Unit/Runtime Dynamic': 0.524637,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0181281,
'L2/Runtime Dynamic': 0.00537116,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 3.82885,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.25269,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0838487,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0838487,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 4.2248,
'Load Store Unit/Runtime Dynamic': 1.75005,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.206757,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.413513,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0733786,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0736115,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.281636,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0321179,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.563796,
'Memory Management Unit/Runtime Dynamic': 0.105729,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 19.8522,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.0425192,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0112913,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.124262,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.178073,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 4.59185,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328}],
'DRAM': {'Area': 0,
'Gate Leakage': 0,
'Peak Dynamic': 5.577454440221784,
'Runtime Dynamic': 5.577454440221784,
'Subthreshold Leakage': 4.252,
'Subthreshold Leakage with power gating': 4.252},
'L3': [{'Area': 61.9075,
'Gate Leakage': 0.0484137,
'Peak Dynamic': 0.231032,
'Runtime Dynamic': 0.159535,
'Subthreshold Leakage': 6.80085,
'Subthreshold Leakage with power gating': 3.32364}],
'Processor': {'Area': 191.908,
'Gate Leakage': 1.53485,
'Peak Dynamic': 81.499,
'Peak Power': 114.611,
'Runtime Dynamic': 19.4277,
'Subthreshold Leakage': 31.5774,
'Subthreshold Leakage with power gating': 13.9484,
'Total Cores/Area': 128.669,
'Total Cores/Gate Leakage': 1.4798,
'Total Cores/Peak Dynamic': 81.2679,
'Total Cores/Runtime Dynamic': 19.2681,
'Total Cores/Subthreshold Leakage': 24.7074,
'Total Cores/Subthreshold Leakage with power gating': 10.2429,
'Total L3s/Area': 61.9075,
'Total L3s/Gate Leakage': 0.0484137,
'Total L3s/Peak Dynamic': 0.231032,
'Total L3s/Runtime Dynamic': 0.159535,
'Total L3s/Subthreshold Leakage': 6.80085,
'Total L3s/Subthreshold Leakage with power gating': 3.32364,
'Total Leakage': 33.1122,
'Total NoCs/Area': 1.33155,
'Total NoCs/Gate Leakage': 0.00662954,
'Total NoCs/Peak Dynamic': 0.0,
'Total NoCs/Runtime Dynamic': 0.0,
'Total NoCs/Subthreshold Leakage': 0.0691322,
'Total NoCs/Subthreshold Leakage with power gating': 0.0259246}} | 75.04814 | 124 | 0.682013 | 8,082 | 68,594 | 5.78248 | 0.067681 | 0.123593 | 0.11298 | 0.093465 | 0.939402 | 0.931527 | 0.918132 | 0.888111 | 0.863076 | 0.84247 | 0 | 0.131703 | 0.224378 | 68,594 | 914 | 125 | 75.04814 | 0.746706 | 0 | 0 | 0.642232 | 0 | 0 | 0.657555 | 0.048108 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5fcb1ecda987e83d2aa920670dd0b29303b4136f | 3,738 | py | Python | utils/PatchingUnpatching.py | thomaskuestner/CNNArt | c2fc639dd2ce035f6ca90113290682a0ccd26fb8 | [
"Apache-2.0"
] | 22 | 2018-04-27T21:28:46.000Z | 2021-12-24T06:44:55.000Z | utils/PatchingUnpatching.py | thomaskuestner/CNNArt | c2fc639dd2ce035f6ca90113290682a0ccd26fb8 | [
"Apache-2.0"
] | 81 | 2017-11-09T17:23:15.000Z | 2020-01-28T22:54:13.000Z | utils/PatchingUnpatching.py | thomaskuestner/CNNArt | c2fc639dd2ce035f6ca90113290682a0ccd26fb8 | [
"Apache-2.0"
] | 18 | 2017-11-13T16:12:17.000Z | 2020-08-27T10:17:34.000Z | import numpy as np


def fPatch(imgIn, patch_size=[40, 40, 10], stride=10):
    """Cut imgIn (2D or 3D) into overlapping 2D/3D patches of patch_size with the given stride."""
iSize = imgIn.shape
if len(iSize) < 3:
l2Dimg = True
iSize = iSize + (1,)
else:
l2Dimg = False
if len(patch_size) < 3:
l2DPatching = True
else:
l2DPatching = False
patches = []
if l2DPatching: # 2D patching on a 2D/3D image
for h in range(0, iSize[2]):
for i in range(0, iSize[0] - patch_size[0] + 1, stride):
for j in range(0, iSize[1] - patch_size[1] + 1, stride):
if l2Dimg:
x = imgIn[i:i + patch_size[0], j:j + patch_size[1]]
else:
x = imgIn[i:i + patch_size[0], j:j + patch_size[1], h]
patches.append(x)
else: # 3D patching on a 2D/3D image
for h in range(0, iSize[2] - patch_size[2] + 1, stride):
for i in range(0, iSize[0] - patch_size[0] + 1, stride):
for j in range(0, iSize[1] - patch_size[1] + 1, stride):
if l2Dimg:
x = imgIn[i:i + patch_size[0], j:j + patch_size[1]]
else:
x = imgIn[i:i + patch_size[0], j:j + patch_size[1], h:h+patch_size[2]]
patches.append(x)
return np.array(patches, dtype=imgIn.dtype), iSize
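The number of patches `fPatch` produces follows directly from its loop bounds: along each axis a patch starts every `stride` voxels as long as the full patch still fits. A minimal sketch of that arithmetic (the helper name `expected_patch_count` is hypothetical, not part of the module):

```python
def expected_patch_count(i_size, patch_size, stride):
    # Mirrors the 3D loop bounds in fPatch: one patch starts every
    # `stride` voxels while the whole patch still fits in the image.
    counts = [len(range(0, i_size[d] - patch_size[d] + 1, stride))
              for d in range(3)]
    return counts[0] * counts[1] * counts[2]

# A 100x100x30 volume with the default 40x40x10 patches and stride 10
# gives 7 start positions along each in-plane axis and 3 along slices.
print(expected_patch_count((100, 100, 30), (40, 40, 10), 10))  # 7 * 7 * 3 = 147
```

Note that when the patch is larger than the image along any axis, the corresponding `range` is empty and no patches are produced.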
def fUnpatch(patchesIn, iSize, patch_size=[40, 40, 10], stride=10, overlap_mode='avg'):
    """Reassemble patches into an image of shape iSize; 'avg' averages overlapping regions."""
if len(iSize) < 3:
l2Dimg = True
iSize = iSize + (1,)
else:
l2Dimg = False
if len(patch_size) < 3:
l2DPatching = True
else:
l2DPatching = False
img = np.zeros(iSize, dtype=patchesIn.dtype)
iScale = np.zeros(iSize, dtype=patchesIn.dtype)
iCnt = 0
if l2DPatching: # 2D patching on a 2D/3D image
for h in range(0, iSize[2]):
for i in range(0, iSize[0] - patch_size[0] + 1, stride):
for j in range(0, iSize[1] - patch_size[1] + 1, stride):
#lMask = np.zeros(iSize)
#lMask[i:i + patch_size, j:j + patch_size, h] = 1
#lMask = lMask == 1
# TODO: should work for both 2D and 3D image since last dimension is appended -> crop it away later?
if overlap_mode == 'avg':
img[i:i + patch_size[0], j:j + patch_size[1], h] += patchesIn[iCnt, :, :]
else:
img[i:i + patch_size[0], j:j + patch_size[1], h] = patchesIn[iCnt, :, :]
iScale[i:i + patch_size[0], j:j + patch_size[1], h] += 1
#iScale[lMask] = iScale[lMask] + 1
#iScale[i:i + patch_size, j:j + patch_size, h] = float(min(i + 1, patch_size, iSize[0] - i) * min(j + 1, patch_size, iSize[1] - j))
iCnt += 1
else:
for h in range(0, iSize[2] - patch_size[2] + 1, stride):
for i in range(0, iSize[0] - patch_size[0] + 1, stride):
for j in range(0, iSize[1] - patch_size[1] + 1, stride):
# TODO: should work for both 2D and 3D image since last dimension is appended -> crop it away later?
if overlap_mode == 'avg':
img[i:i + patch_size[0], j:j + patch_size[1], h:h + patch_size[2]] += patchesIn[iCnt, :, :]
else:
img[i:i + patch_size[0], j:j + patch_size[1], h:h + patch_size[2]] = patchesIn[iCnt, :, :]
iScale[i:i + patch_size[0], j:j + patch_size[1], h:h + patch_size[2]] += 1
iCnt += 1
if overlap_mode == 'avg':
iScale[iScale == 0] = 1
img = np.divide(img, iScale)
return img, iScale
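The `'avg'` branch of `fUnpatch` accumulates overlapping patch values into `img` while `iScale` counts how many patches touched each pixel, then divides once at the end. A self-contained toy 2D version of that accumulate-and-divide step (with a constant-valued patch, so the average trivially recovers the patch value):

```python
import numpy as np

# Toy 2D sketch of fUnpatch's 'avg' path: sum overlapping patches,
# count contributions per pixel, divide at the end.
img = np.zeros((4, 4))
scale = np.zeros((4, 4))
patch = np.full((2, 2), 5.0)
for i in range(0, 3):              # 2x2 patches, stride 1
    for j in range(0, 3):
        img[i:i + 2, j:j + 2] += patch
        scale[i:i + 2, j:j + 2] += 1
scale[scale == 0] = 1              # guard against uncovered pixels
img = np.divide(img, scale)
print(img)  # every pixel averages back to 5.0
```

The `scale[scale == 0] = 1` guard matters when stride and patch size leave a border uncovered; those pixels stay zero instead of producing a division by zero.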
| 42 | 151 | 0.490637 | 533 | 3,738 | 3.350844 | 0.125704 | 0.221725 | 0.078387 | 0.087346 | 0.805151 | 0.801792 | 0.767077 | 0.741321 | 0.741321 | 0.715566 | 0 | 0.055199 | 0.374799 | 3,738 | 88 | 152 | 42.477273 | 0.709029 | 0.143392 | 0 | 0.720588 | 0 | 0 | 0.003761 | 0 | 0 | 0 | 0 | 0.011364 | 0 | 1 | 0.029412 | false | 0 | 0 | 0 | 0.058824 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
39f55921de64e482f8a6ade33418139d7a1c65ad | 128 | py | Python | python/testData/completion/heavyStarPropagation/lib/_pkg0/_pkg0_1/_pkg0_1_0/_pkg0_1_0_1/_pkg0_1_0_1_0/_mod0_1_0_1_0_3.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/completion/heavyStarPropagation/lib/_pkg0/_pkg0_1/_pkg0_1_0/_pkg0_1_0_1/_pkg0_1_0_1_0/_mod0_1_0_1_0_3.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/completion/heavyStarPropagation/lib/_pkg0/_pkg0_1/_pkg0_1_0/_pkg0_1_0_1/_pkg0_1_0_1_0/_mod0_1_0_1_0_3.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | name0_1_0_1_0_3_0 = None
name0_1_0_1_0_3_1 = None
name0_1_0_1_0_3_2 = None
name0_1_0_1_0_3_3 = None
name0_1_0_1_0_3_4 = None | 14.222222 | 24 | 0.820313 | 40 | 128 | 1.875 | 0.175 | 0.266667 | 0.466667 | 0.533333 | 0.88 | 0.88 | 0.746667 | 0 | 0 | 0 | 0 | 0.318182 | 0.140625 | 128 | 9 | 25 | 14.222222 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
f2fb048511a13800e001ab112818ad397ae124ca | 888 | py | Python | example/test/core/camera/flat/super/unit.py | dmilos/IceRay | 4e01f141363c0d126d3c700c1f5f892967e3d520 | [
"MIT-0"
] | 2 | 2020-09-04T12:27:15.000Z | 2022-01-17T14:49:40.000Z | example/test/core/camera/flat/super/unit.py | dmilos/IceRay | 4e01f141363c0d126d3c700c1f5f892967e3d520 | [
"MIT-0"
] | null | null | null | example/test/core/camera/flat/super/unit.py | dmilos/IceRay | 4e01f141363c0d126d3c700c1f5f892967e3d520 | [
"MIT-0"
] | 1 | 2020-09-04T12:27:52.000Z | 2020-09-04T12:27:52.000Z | import IceRayCpp
def name( ):
return "camera_flat_super"
def perspective( ):
eye = IceRayCpp.MathTypeCoord2D().load(0,0)
view = IceRayCpp.MathTypeCoord2D().load(2,2)
camera = IceRayCpp.CameraFlatSuper( eye, view )
return { 'this': camera }
def orthogonal( ):
eye = IceRayCpp.MathTypeCoord2D().load( 2,2 )
view = IceRayCpp.MathTypeCoord2D().load( 2,2 )
camera = IceRayCpp.CameraFlatSuper( eye, view )
return { 'this': camera }
def X( ): #TODO
eye = IceRayCpp.MathTypeCoord2D().load(2,0)
view = IceRayCpp.MathTypeCoord2D().load(2,2)
camera = IceRayCpp.CameraFlatSuper( eye, view )
return { 'this': camera }
def Y( ):#TODO
eye = IceRayCpp.MathTypeCoord2D().load(0,2)
view = IceRayCpp.MathTypeCoord2D().load(2,2)
camera = IceRayCpp.CameraFlatSuper( eye, view )
return { 'this': camera }
| 30.62069 | 52 | 0.63964 | 98 | 888 | 5.77551 | 0.214286 | 0.339223 | 0.39576 | 0.30742 | 0.878092 | 0.637809 | 0.637809 | 0.637809 | 0.637809 | 0.637809 | 0 | 0.034732 | 0.221847 | 888 | 28 | 53 | 31.714286 | 0.78437 | 0.009009 | 0 | 0.521739 | 0 | 0 | 0.038824 | 0 | 0 | 0 | 0 | 0.035714 | 0 | 1 | 0.217391 | false | 0 | 0.043478 | 0.043478 | 0.478261 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f2feac796522e743d541447f45ac3a39d11c01d8 | 2,068 | py | Python | Data_Analysis/compare_policies.py | JennyLynnFletcher/RL_Environment_Design | dc42e668581c3a3901732230eb3561efd72a64a6 | [
"MIT"
] | 1 | 2021-05-30T11:41:08.000Z | 2021-05-30T11:41:08.000Z | Data_Analysis/compare_policies.py | JennyLynnFletcher/RL_Environment_Design | dc42e668581c3a3901732230eb3561efd72a64a6 | [
"MIT"
] | null | null | null | Data_Analysis/compare_policies.py | JennyLynnFletcher/RL_Environment_Design | dc42e668581c3a3901732230eb3561efd72a64a6 | [
"MIT"
] | null | null | null | import pickle
import math
#results = pickle.load( open( '/home/jenny/ray_results/PPO_ObstaclesEnv_2021-04-15_18-53-36kiwk8n0o/checkpoint_720/results.pkl', "rb" ) )
#results = pickle.load( open( '/home/jenny/ray_results/PPO_ObstaclesEnv_2021-04-15_18-53-36kiwk8n0o/checkpoint_649/results.pkl', "rb" ) )
results_440 = pickle.load( open( '/home/jenny/ray_results/PPO_ObstaclesEnv_2021-04-12_07-28-31nsbm6pce/checkpoint_440/results.pkl', "rb" ) )
#results = pickle.load( open( '/home/jenny/ray_results/PPO_ObstaclesEnv_2021-04-09_07-00-579qrzfh07/checkpoint_392/results.pkl', "rb" ) )
#results = pickle.load( open( '/home/jenny/ray_results/PPO_ObstaclesEnv_2021-04-09_07-00-579qrzfh07/checkpoint_310/results.pkl', "rb" ) )
#results = pickle.load( open( '/home/jenny/ray_results/PPO_ObstaclesEnv_2021-04-06_17-29-13embh0dvs/checkpoint_192/results.pkl', "rb" ) )
results_1 = pickle.load( open( '/home/jenny/ray_results/PPO_ObstaclesEnv_2021-04-06_17-29-13embh0dvs/checkpoint_1/results.pkl', "rb" ) )
def count_rewards(results):
    """Group the flat result list into per-instance lists, then collect
    rewards above the early-termination threshold (-100) and count them."""
    instances = []
    i = 0
    instance = []
    for result in results:
        if result['instance'] > i:
            instances.append(instance)
            instance = []
            i += 1
        else:
            instance.append(result)
    rewards = []
    count = 0
    for instance in instances:
        reward = []
        for entry in instance:
            r = entry['reward']
            if r > -100:
                reward.append(r)
                count += 1
        rewards.append(reward)
    return rewards, count

rewards_440, early_termination_440 = count_rewards(results_440)
rewards_1, early_termination_1 = count_rewards(results_1)
print(early_termination_440)
print(early_termination_1)
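As a follow-up, the per-instance reward lists collected above can be summarized, for example with a mean per instance (a small sketch; `mean_rewards` is my own helper name, not part of the original script):

```python
import statistics

def mean_rewards(rewards):
    # Mean reward per instance; instances with no surviving rewards yield None.
    return [statistics.mean(r) if r else None for r in rewards]

print(mean_rewards([[1.0, 2.0], [], [3.0]]))  # [1.5, None, 3.0]
```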
| 31.333333 | 140 | 0.641683 | 267 | 2,068 | 4.779026 | 0.205993 | 0.054859 | 0.076803 | 0.098746 | 0.82837 | 0.82837 | 0.82837 | 0.82837 | 0.770376 | 0.770376 | 0 | 0.098439 | 0.194391 | 2,068 | 65 | 141 | 31.815385 | 0.667467 | 0.32882 | 0 | 0.73913 | 0 | 0.043478 | 0.168067 | 0.143621 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.043478 | 0 | 0.043478 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
84139f895e5ceb4cd164f4fcb3026e558fef9879 | 320 | py | Python | Numeric Patterns/numericpattern114.py | vaidehisinha1/Python-PatternHouse | 49f71bcc5319a838592e69b0e49ef1edba32bf7c | [
"MIT"
] | null | null | null | Numeric Patterns/numericpattern114.py | vaidehisinha1/Python-PatternHouse | 49f71bcc5319a838592e69b0e49ef1edba32bf7c | [
"MIT"
] | 471 | 2022-01-15T07:07:18.000Z | 2022-02-28T16:01:42.000Z | Numeric Patterns/numericpattern114.py | vaidehisinha1/Python-PatternHouse | 49f71bcc5319a838592e69b0e49ef1edba32bf7c | [
"MIT"
] | 2 | 2022-01-17T09:43:16.000Z | 2022-01-29T15:15:47.000Z | print("Enter the number of rows: ")
n = int(input())
count = 2*n-1
for i in range(n):
print(" "*i + str(count)*(2*(n-i)-1))
count-=2
# SAMPLE OUTPUT (for n = 3):
# Enter the number of rows: 
# 3
# 55555
#  333
#   1
| 18.823529 | 44 | 0.50625 | 57 | 320 | 2.842105 | 0.315789 | 0.222222 | 0.17284 | 0.234568 | 0.962963 | 0.962963 | 0.962963 | 0.962963 | 0.962963 | 0.962963 | 0 | 0.042194 | 0.259375 | 320 | 16 | 45 | 20 | 0.64135 | 0.4625 | 0 | 0 | 0 | 0 | 0.164634 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
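The same pattern logic can also be written as a reusable function that returns the pattern as a string instead of printing it row by row (a sketch; the function name is mine):

```python
def numeric_pattern(n):
    # Row i is indented by i spaces and repeats the current countdown
    # value (2*n-1, 2*n-3, ...) across a width of 2*(n-i)-1 characters.
    lines = []
    count = 2 * n - 1
    for i in range(n):
        lines.append(" " * i + str(count) * (2 * (n - i) - 1))
        count -= 2
    return "\n".join(lines)

print(numeric_pattern(3))
```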
8459fb60057776eb4dc065748e7de1fd8b098378 | 20,466 | py | Python | FacePatchesModel.py | yecfly/DEPRESSIONEST | 21b72906aac9f310e264f7a5eea348480a647197 | [
"Unlicense"
] | null | null | null | FacePatchesModel.py | yecfly/DEPRESSIONEST | 21b72906aac9f310e264f7a5eea348480a647197 | [
"Unlicense"
] | null | null | null | FacePatchesModel.py | yecfly/DEPRESSIONEST | 21b72906aac9f310e264f7a5eea348480a647197 | [
"Unlicense"
] | null | null | null | import numpy as np
import tensorflow as tf
import cv2
import tflearn
from FaceProcessUtil import PreprocessImage as PPI
import traceback  # used in FacePatchesModel to report model-loading failures
MAPPING = {0:'neutral', 1:'anger', 2:'surprise', 3:'disgust', 4:'fear', 5:'happy', 6:'sadness'}
MP = './models/'
DEFAULT_PADDING = 'SAME'
TypeThreshold=100
eye_p_shape=[None, 26, 64, 1]
midd_p_shape=[None, 49, 28, 1]
mou_p_shape=[None, 30, 54, 1]
###dependent modules for network definition
###
#4 network definition under tflearn
def FacePatches_NET_3Conv_IInception_tflear(eyep, middlep, mouthp):
e_net=tflearn.conv_2d(eyep, 8, 3, activation='relu',name='eye_conv1_1_3x3')
e_net=tflearn.conv_2d(e_net, 8, 3, activation='relu',name='eye_conv1_2_3x3')
e_net=tflearn.max_pool_2d(e_net,2,2,name='eye_pool1')
e_net=tflearn.conv_2d(e_net, 32, 3, activation='relu', name='eye_conv2_1_3x3')
e_net=tflearn.conv_2d(e_net, 32, 3, activation='relu', name='eye_conv2_2_3x3')
e_net=tflearn.max_pool_2d(e_net, 2, 2, name='eye_pool2')
e_net=tflearn.conv_2d(e_net, 128, 3, activation='relu', name='eye_conv3_1_3x3')
e_net=tflearn.conv_2d(e_net, 128, 3, activation='relu', name='eye_conv3_2_3x3')
e_net=tflearn.max_pool_2d(e_net, 2, 2, name='eye_pool3')
e_net=tflearn.fully_connected(e_net, 1024, activation='tanh', name='eye_fc1')
mi_net=tflearn.conv_2d(middlep, 8, 3, activation='relu',name='middle_conv1_1_3x3')
mi_net=tflearn.conv_2d(mi_net, 8, 3, activation='relu',name='middle_conv1_2_3x3')
mi_net=tflearn.max_pool_2d(mi_net,2,2,name='middle_pool1')
mi_net=tflearn.conv_2d(mi_net, 32, 3, activation='relu', name='middle_conv2_1_3x3')
mi_net=tflearn.conv_2d(mi_net, 32, 3, activation='relu', name='middle_conv2_2_3x3')
mi_net=tflearn.max_pool_2d(mi_net, 2, 2, name='middle_pool2')
mi_net=tflearn.conv_2d(mi_net, 128, 3, activation='relu', name='middle_conv3_1_3x3')
mi_net=tflearn.conv_2d(mi_net, 128, 3, activation='relu', name='middle_conv3_2_3x3')
mi_net=tflearn.max_pool_2d(mi_net, 2, 2, name='middle_pool3')
mi_net=tflearn.fully_connected(mi_net, 1024, activation='tanh', name='middle_fc1')
mo_net=tflearn.conv_2d(mouthp, 8, 3, activation='relu',name='mouth_conv1_1_3x3')
mo_net=tflearn.conv_2d(mo_net, 8, 3, activation='relu',name='mouth_conv1_2_3x3')
mo_net=tflearn.max_pool_2d(mo_net,2,2,name='mouth_pool1')
mo_net=tflearn.conv_2d(mo_net, 32, 3, activation='relu', name='mouth_conv2_1_3x3')
mo_net=tflearn.conv_2d(mo_net, 32, 3, activation='relu', name='mouth_conv2_2_3x3')
mo_net=tflearn.max_pool_2d(mo_net, 2, 2, name='mouth_pool2')
mo_net=tflearn.conv_2d(mo_net, 128, 3, activation='relu', name='mouth_conv3_1_3x3')
mo_net=tflearn.conv_2d(mo_net, 128, 3, activation='relu', name='mouth_conv3_2_3x3')
mo_net=tflearn.max_pool_2d(mo_net, 2, 2, name='mouth_pool3')
mo_net=tflearn.fully_connected(mo_net, 1024, activation='tanh', name='mouth_fc1')
fc_net=tf.concat([e_net,mi_net,mo_net], 1, name='fusion_1')
fc_net=tflearn.fully_connected(fc_net, 2048, activation='relu', name='fc1')
fc_net=tflearn.dropout(fc_net, 0.8, name='drop1')
fc_net=tflearn.fully_connected(fc_net, 2048, activation='relu', name='fc2')
fc_net=tflearn.dropout(fc_net, 0.8, name='drop2')
softmax=tflearn.fully_connected(fc_net, 7, activation='softmax', name='prob')
return softmax
#5 network definition under tflearn
def FacePatches_NET_3Conv_2Inception_tflearn(eyep, middlep, mouthp):
e_net=tflearn.conv_2d(eyep, 8, 3, activation='relu',name='eye_conv1_1_3x3')
e_net=tflearn.conv_2d(e_net, 8, 3, activation='relu',name='eye_conv1_2_3x3')
e_net=tflearn.max_pool_2d(e_net,2,2,name='eye_pool1')
e_net=tflearn.conv_2d(e_net, 32, 3, activation='relu', name='eye_conv2_1_3x3')
e_net=tflearn.conv_2d(e_net, 32, 3, activation='relu', name='eye_conv2_2_3x3')
e_net=tflearn.max_pool_2d(e_net, 2, 2, name='eye_pool2')
efc2 = tflearn.fully_connected(e_net, 1024, activation='tanh', name='eye_fc2')
e_net=tflearn.conv_2d(e_net, 128, 3, activation='relu', name='eye_conv3_1_3x3')
e_net=tflearn.conv_2d(e_net, 128, 3, activation='relu', name='eye_conv3_2_3x3')
e_net=tflearn.max_pool_2d(e_net, 2, 2, name='eye_pool3')
e_net=tflearn.fully_connected(e_net, 1024, activation='tanh', name='eye_fc1')
e_net=tf.concat([e_net, efc2], 1, name='eye_fc')
mi_net=tflearn.conv_2d(middlep, 8, 3, activation='relu',name='middle_conv1_1_3x3')
mi_net=tflearn.conv_2d(mi_net, 8, 3, activation='relu',name='middle_conv1_2_3x3')
mi_net=tflearn.max_pool_2d(mi_net,2,2,name='middle_pool1')
mi_net=tflearn.conv_2d(mi_net, 32, 3, activation='relu', name='middle_conv2_1_3x3')
mi_net=tflearn.conv_2d(mi_net, 32, 3, activation='relu', name='middle_conv2_2_3x3')
mi_net=tflearn.max_pool_2d(mi_net, 2, 2, name='middle_pool2')
mifc2 = tflearn.fully_connected(mi_net, 1024, activation='tanh', name='middle_fc2')
mi_net=tflearn.conv_2d(mi_net, 128, 3, activation='relu', name='middle_conv3_1_3x3')
mi_net=tflearn.conv_2d(mi_net, 128, 3, activation='relu', name='middle_conv3_2_3x3')
mi_net=tflearn.max_pool_2d(mi_net, 2, 2, name='middle_pool3')
mi_net=tflearn.fully_connected(mi_net, 1024, activation='tanh', name='middle_fc1')
mi_net=tf.concat([mi_net, mifc2], 1, name='middle_fc')
mo_net=tflearn.conv_2d(mouthp, 8, 3, activation='relu',name='mouth_conv1_1_3x3')
mo_net=tflearn.conv_2d(mo_net, 8, 3, activation='relu',name='mouth_conv1_2_3x3')
mo_net=tflearn.max_pool_2d(mo_net,2,2,name='mouth_pool1')
mo_net=tflearn.conv_2d(mo_net, 32, 3, activation='relu', name='mouth_conv2_1_3x3')
mo_net=tflearn.conv_2d(mo_net, 32, 3, activation='relu', name='mouth_conv2_2_3x3')
mo_net=tflearn.max_pool_2d(mo_net, 2, 2, name='mouth_pool2')
mfc2 = tflearn.fully_connected(mo_net, 1024, activation='tanh', name='mouth_fc2')
mo_net=tflearn.conv_2d(mo_net, 128, 3, activation='relu', name='mouth_conv3_1_3x3')
mo_net=tflearn.conv_2d(mo_net, 128, 3, activation='relu', name='mouth_conv3_2_3x3')
mo_net=tflearn.max_pool_2d(mo_net, 2, 2, name='mouth_pool3')
mo_net=tflearn.fully_connected(mo_net, 1024, activation='tanh', name='mouth_fc1')
mo_net=tf.concat([mo_net, mfc2], 1, name='mouth_fc')
fc_net=tf.concat([e_net,mi_net,mo_net], 1, name='fusion_1')
fc_net=tflearn.fully_connected(fc_net, 2048, activation='relu', name='fc1')
fc_net=tflearn.dropout(fc_net, 0.8, name='drop1')
fc_net=tflearn.fully_connected(fc_net, 2048, activation='relu', name='fc2')
fc_net=tflearn.dropout(fc_net, 0.8, name='drop2')
softmax=tflearn.fully_connected(fc_net, 7, activation='softmax', name='prob')
return softmax
#6 network definition under tflearn
def FacePatches_NET_3Conv_3Inception_tflearn(eyep, middlep, mouthp):
e_net=tflearn.conv_2d(eyep, 8, 3, activation='relu',name='eye_conv1_1_3x3')
e_net=tflearn.conv_2d(e_net, 8, 3, activation='relu',name='eye_conv1_2_3x3')
e_net=tflearn.max_pool_2d(e_net,2,2,name='eye_pool1')
efc3 = tflearn.fully_connected(e_net, 1024, activation='tanh', name='eye_fc3')
e_net=tflearn.conv_2d(e_net, 32, 3, activation='relu', name='eye_conv2_1_3x3')
e_net=tflearn.conv_2d(e_net, 32, 3, activation='relu', name='eye_conv2_2_3x3')
e_net=tflearn.max_pool_2d(e_net, 2, 2, name='eye_pool2')
efc2 = tflearn.fully_connected(e_net, 1024, activation='tanh', name='eye_fc2')
e_net=tflearn.conv_2d(e_net, 128, 3, activation='relu', name='eye_conv3_1_3x3')
e_net=tflearn.conv_2d(e_net, 128, 3, activation='relu', name='eye_conv3_2_3x3')
e_net=tflearn.max_pool_2d(e_net, 2, 2, name='eye_pool3')
e_net=tflearn.fully_connected(e_net, 1024, activation='tanh', name='eye_fc1')
e_net=tf.concat([e_net, efc2, efc3], 1, name='eye_fc')
mi_net=tflearn.conv_2d(middlep, 8, 3, activation='relu',name='middle_conv1_1_3x3')
mi_net=tflearn.conv_2d(mi_net, 8, 3, activation='relu',name='middle_conv1_2_3x3')
mi_net=tflearn.max_pool_2d(mi_net,2,2,name='middle_pool1')
mifc3 = tflearn.fully_connected(mi_net, 1024, activation='tanh', name='middle_fc3')
mi_net=tflearn.conv_2d(mi_net, 32, 3, activation='relu', name='middle_conv2_1_3x3')
mi_net=tflearn.conv_2d(mi_net, 32, 3, activation='relu', name='middle_conv2_2_3x3')
mi_net=tflearn.max_pool_2d(mi_net, 2, 2, name='middle_pool2')
mifc2 = tflearn.fully_connected(mi_net, 1024, activation='tanh', name='middle_fc2')
mi_net=tflearn.conv_2d(mi_net, 128, 3, activation='relu', name='middle_conv3_1_3x3')
mi_net=tflearn.conv_2d(mi_net, 128, 3, activation='relu', name='middle_conv3_2_3x3')
mi_net=tflearn.max_pool_2d(mi_net, 2, 2, name='middle_pool3')
mi_net=tflearn.fully_connected(mi_net, 1024, activation='tanh', name='middle_fc1')
mi_net=tf.concat([mi_net, mifc2, mifc3], 1, name='middle_fc')
mo_net=tflearn.conv_2d(mouthp, 8, 3, activation='relu',name='mouth_conv1_1_3x3')
mo_net=tflearn.conv_2d(mo_net, 8, 3, activation='relu',name='mouth_conv1_2_3x3')
mo_net=tflearn.max_pool_2d(mo_net,2,2,name='mouth_pool1')
mfc3 = tflearn.fully_connected(mo_net, 1024, activation='tanh', name='mouth_fc3')
mo_net=tflearn.conv_2d(mo_net, 32, 3, activation='relu', name='mouth_conv2_1_3x3')
mo_net=tflearn.conv_2d(mo_net, 32, 3, activation='relu', name='mouth_conv2_2_3x3')
mo_net=tflearn.max_pool_2d(mo_net, 2, 2, name='mouth_pool2')
mfc2 = tflearn.fully_connected(mo_net, 1024, activation='tanh', name='mouth_fc2')
mo_net=tflearn.conv_2d(mo_net, 128, 3, activation='relu', name='mouth_conv3_1_3x3')
mo_net=tflearn.conv_2d(mo_net, 128, 3, activation='relu', name='mouth_conv3_2_3x3')
mo_net=tflearn.max_pool_2d(mo_net, 2, 2, name='mouth_pool3')
mo_net=tflearn.fully_connected(mo_net, 1024, activation='tanh', name='mouth_fc1')
mo_net=tf.concat([mo_net, mfc2, mfc3], 1, name='mouth_fc')
fc_net=tf.concat([e_net,mi_net,mo_net], 1, name='fusion_1')
fc_net=tflearn.fully_connected(fc_net, 2048, activation='relu', name='fc1')
fc_net=tflearn.dropout(fc_net, 0.8, name='drop1')
fc_net=tflearn.fully_connected(fc_net, 2048, activation='relu', name='fc2')
fc_net=tflearn.dropout(fc_net, 0.8, name='drop2')
softmax=tflearn.fully_connected(fc_net, 7, activation='softmax', name='prob')
return softmax
###using net 24
def FacePatches_NET_3C_1I_2P(eyep, mouthp):
###using net 24
e_net=tflearn.conv_2d(eyep, 8, 3, activation='relu',name='eye_conv1_1_3x3')
e_net=tflearn.conv_2d(e_net, 8, 3, activation='relu',name='eye_conv1_2_3x3')
e_net=tflearn.max_pool_2d(e_net,2,2,name='eye_pool1')
e_net=tflearn.conv_2d(e_net, 32, 3, activation='relu', name='eye_conv2_1_3x3')
e_net=tflearn.conv_2d(e_net, 32, 3, activation='relu', name='eye_conv2_2_3x3')
e_net=tflearn.max_pool_2d(e_net, 2, 2, name='eye_pool2')
e_net=tflearn.conv_2d(e_net, 128, 3, activation='relu', name='eye_conv3_1_3x3')
e_net=tflearn.conv_2d(e_net, 128, 3, activation='relu', name='eye_conv3_2_3x3')
e_net=tflearn.max_pool_2d(e_net, 2, 2, name='eye_pool3')
e_net=tflearn.fully_connected(e_net, 1024, activation='tanh', name='eye_fc1')
mo_net=tflearn.conv_2d(mouthp, 8, 3, activation='relu',name='mouth_conv1_1_3x3')
mo_net=tflearn.conv_2d(mo_net, 8, 3, activation='relu',name='mouth_conv1_2_3x3')
mo_net=tflearn.max_pool_2d(mo_net,2,2,name='mouth_pool1')
mo_net=tflearn.conv_2d(mo_net, 32, 3, activation='relu', name='mouth_conv2_1_3x3')
mo_net=tflearn.conv_2d(mo_net, 32, 3, activation='relu', name='mouth_conv2_2_3x3')
mo_net=tflearn.max_pool_2d(mo_net, 2, 2, name='mouth_pool2')
mo_net=tflearn.conv_2d(mo_net, 128, 3, activation='relu', name='mouth_conv3_1_3x3')
mo_net=tflearn.conv_2d(mo_net, 128, 3, activation='relu', name='mouth_conv3_2_3x3')
mo_net=tflearn.max_pool_2d(mo_net, 2, 2, name='mouth_pool3')
mo_net=tflearn.fully_connected(mo_net, 1024, activation='tanh', name='mouth_fc1')
fc_net=tf.concat([e_net, mo_net], 1, name='fusion_1')
fc_net=tflearn.fully_connected(fc_net, 2048, activation='relu', name='fc1')
fc_net=tflearn.dropout(fc_net, 0.8, name='drop1')
fc_net=tflearn.fully_connected(fc_net, 2048, activation='relu', name='fc2')
fc_net=tflearn.dropout(fc_net, 0.8, name='drop2')
softmax=tflearn.fully_connected(fc_net, 7, activation='softmax', name='prob')
return softmax
###using net 25
def FacePatches_NET_3C_2I_2P(eyep, mouthp):
###using net 25
e_net=tflearn.conv_2d(eyep, 8, 3, activation='relu',name='eye_conv1_1_3x3')
e_net=tflearn.conv_2d(e_net, 8, 3, activation='relu',name='eye_conv1_2_3x3')
e_net=tflearn.max_pool_2d(e_net,2,2,name='eye_pool1')
e_net=tflearn.conv_2d(e_net, 32, 3, activation='relu', name='eye_conv2_1_3x3')
e_net=tflearn.conv_2d(e_net, 32, 3, activation='relu', name='eye_conv2_2_3x3')
e_net=tflearn.max_pool_2d(e_net, 2, 2, name='eye_pool2')
efc2 = tflearn.fully_connected(e_net, 1024, activation='tanh', name='eye_fc2')
e_net=tflearn.conv_2d(e_net, 128, 3, activation='relu', name='eye_conv3_1_3x3')
e_net=tflearn.conv_2d(e_net, 128, 3, activation='relu', name='eye_conv3_2_3x3')
e_net=tflearn.max_pool_2d(e_net, 2, 2, name='eye_pool3')
e_net=tflearn.fully_connected(e_net, 1024, activation='tanh', name='eye_fc1')
e_net=tf.concat([e_net, efc2], 1, name='eye_fc')
mo_net=tflearn.conv_2d(mouthp, 8, 3, activation='relu',name='mouth_conv1_1_3x3')
mo_net=tflearn.conv_2d(mo_net, 8, 3, activation='relu',name='mouth_conv1_2_3x3')
mo_net=tflearn.max_pool_2d(mo_net,2,2,name='mouth_pool1')
mo_net=tflearn.conv_2d(mo_net, 32, 3, activation='relu', name='mouth_conv2_1_3x3')
mo_net=tflearn.conv_2d(mo_net, 32, 3, activation='relu', name='mouth_conv2_2_3x3')
mo_net=tflearn.max_pool_2d(mo_net, 2, 2, name='mouth_pool2')
mfc2 = tflearn.fully_connected(mo_net, 1024, activation='tanh', name='mouth_fc2')
mo_net=tflearn.conv_2d(mo_net, 128, 3, activation='relu', name='mouth_conv3_1_3x3')
mo_net=tflearn.conv_2d(mo_net, 128, 3, activation='relu', name='mouth_conv3_2_3x3')
mo_net=tflearn.max_pool_2d(mo_net, 2, 2, name='mouth_pool3')
mo_net=tflearn.fully_connected(mo_net, 1024, activation='tanh', name='mouth_fc1')
mo_net=tf.concat([mo_net, mfc2], 1, name='mouth_fc')
fc_net=tf.concat([e_net, mo_net], 1, name='fusion_1')
fc_net=tflearn.fully_connected(fc_net, 2048, activation='relu', name='fc1')
fc_net=tflearn.dropout(fc_net, 0.8, name='drop1')
fc_net=tflearn.fully_connected(fc_net, 2048, activation='relu', name='fc2')
fc_net=tflearn.dropout(fc_net, 0.8, name='drop2')
softmax=tflearn.fully_connected(fc_net, 7, activation='softmax', name='prob')
return softmax
###using net 26
def FacePatches_NET_3C_3I_2P(eyep, mouthp):
###using net 26
e_net=tflearn.conv_2d(eyep, 8, 3, activation='relu',name='eye_conv1_1_3x3')
e_net=tflearn.conv_2d(e_net, 8, 3, activation='relu',name='eye_conv1_2_3x3')
e_net=tflearn.max_pool_2d(e_net,2,2,name='eye_pool1')
efc3 = tflearn.fully_connected(e_net, 1024, activation='tanh', name='eye_fc3')
e_net=tflearn.conv_2d(e_net, 32, 3, activation='relu', name='eye_conv2_1_3x3')
e_net=tflearn.conv_2d(e_net, 32, 3, activation='relu', name='eye_conv2_2_3x3')
e_net=tflearn.max_pool_2d(e_net, 2, 2, name='eye_pool2')
efc2 = tflearn.fully_connected(e_net, 1024, activation='tanh', name='eye_fc2')
e_net=tflearn.conv_2d(e_net, 128, 3, activation='relu', name='eye_conv3_1_3x3')
e_net=tflearn.conv_2d(e_net, 128, 3, activation='relu', name='eye_conv3_2_3x3')
e_net=tflearn.max_pool_2d(e_net, 2, 2, name='eye_pool3')
e_net=tflearn.fully_connected(e_net, 1024, activation='tanh', name='eye_fc1')
e_net=tf.concat([e_net, efc2, efc3], 1, name='eye_fc')
mo_net=tflearn.conv_2d(mouthp, 8, 3, activation='relu',name='mouth_conv1_1_3x3')
mo_net=tflearn.conv_2d(mo_net, 8, 3, activation='relu',name='mouth_conv1_2_3x3')
mo_net=tflearn.max_pool_2d(mo_net,2,2,name='mouth_pool1')
mfc3 = tflearn.fully_connected(mo_net, 1024, activation='tanh', name='mouth_fc3')
mo_net=tflearn.conv_2d(mo_net, 32, 3, activation='relu', name='mouth_conv2_1_3x3')
mo_net=tflearn.conv_2d(mo_net, 32, 3, activation='relu', name='mouth_conv2_2_3x3')
mo_net=tflearn.max_pool_2d(mo_net, 2, 2, name='mouth_pool2')
mfc2 = tflearn.fully_connected(mo_net, 1024, activation='tanh', name='mouth_fc2')
mo_net=tflearn.conv_2d(mo_net, 128, 3, activation='relu', name='mouth_conv3_1_3x3')
mo_net=tflearn.conv_2d(mo_net, 128, 3, activation='relu', name='mouth_conv3_2_3x3')
mo_net=tflearn.max_pool_2d(mo_net, 2, 2, name='mouth_pool3')
mo_net=tflearn.fully_connected(mo_net, 1024, activation='tanh', name='mouth_fc1')
mo_net=tf.concat([mo_net, mfc2, mfc3], 1, name='mouth_fc')
fc_net=tf.concat([e_net,mo_net], 1, name='fusion_1')
fc_net=tflearn.fully_connected(fc_net, 2048, activation='relu', name='fc1')
fc_net=tflearn.dropout(fc_net, 0.8, name='drop1')
fc_net=tflearn.fully_connected(fc_net, 2048, activation='relu', name='fc2')
fc_net=tflearn.dropout(fc_net, 0.8, name='drop2')
softmax=tflearn.fully_connected(fc_net, 7, activation='softmax', name='prob')
return softmax
#
def getModelPathForPrediction(mid=0):
#if mid==300:
# mp=MP+'D502_M3_N3_T0_V0_R4_20171009235521_1.1895357370_.ckpt-16197'#0.9587
#elif mid==301:
# mp=MP+'D502_M3_N3_T4_V4_R4_20171010084104_1.2033878565_.ckpt-18110'#0.9165
#elif mid==303:
# mp=MP+'D502_M3_N3_T5_V5_R4_20171010103653_1.1808838844_.ckpt-19024'#0.9779
if mid==400:
mp=MP+'';
elif mid==500:
mp=MP+'';
elif mid==600:
mp=MP+'';
else:
print('Unexpected Model ID. TRY another one.')
exit(-1)
return mp
#model for prediction
class FacePatchesModel:
def __init__(self, mid=300):
###define the graph
self.networkGraph=tf.Graph()
with self.networkGraph.as_default():
self.eye_p = tf.placeholder(tf.float32, eye_p_shape)
self.mou_p = tf.placeholder(tf.float32, mou_p_shape)
#if (mid//TypeThreshold)==3:
# self.network = FacePatches_NET_3Conv_2Inception({'eyePatch_data':self.eye_p,
# 'middlePatch_data':self.midd_p,
# 'mouthPatch_data':self.mou_p})
if (mid//TypeThreshold)<7 and (mid//TypeThreshold)>3:
self.midd_p = tf.placeholder(tf.float32, midd_p_shape)
self.prob = FacePatches_NET_3Conv_IInception_tflear(self.eye_p,
self.midd_p, self.mou_p)
elif (mid//TypeThreshold) >23 and (mid//TypeThreshold) <27:
                self.prob = FacePatches_NET_3C_1I_2P(self.eye_p, self.mou_p)
else:
print('ERROR: Unexpected network type. Try another mid')
exit(-1)
self.saver=tf.train.Saver()
###load pretrained model
self.sess=tf.InteractiveSession(graph=self.networkGraph)
try:
#must initialize the variables in the graph for compution or loading pretrained weights
self.sess.run(tf.variables_initializer(var_list=self.networkGraph.get_collection(name='variables')))
print('Network variables initialized.')
#the saver must define in the graph of its owner session, or it will occur error in restoration or saving
self.saver.restore(sess=self.sess, save_path=getModelPathForPrediction(mid))
print('Network Model loaded\n')
except:
print('ERROR: Unable to load the pretrained network.')
traceback.print_exc()
exit(2)
    def predict(self, eye_p, midd_p, mou_p):  # inputs must match the patch placeholder shapes defined above
        feed_dict = {self.eye_p: eye_p, self.mou_p: mou_p}
        if hasattr(self, 'midd_p'):  # 2-patch networks define no middle-patch placeholder
            feed_dict[self.midd_p] = midd_p
        probability = self.prob.eval(feed_dict=feed_dict)
        emotion = MAPPING[np.argmax(probability)]
        return emotion, probability
| 62.018182 | 122 | 0.696765 | 3,432 | 20,466 | 3.837413 | 0.066725 | 0.132118 | 0.139408 | 0.109339 | 0.863022 | 0.846317 | 0.846317 | 0.846317 | 0.8347 | 0.8347 | 0 | 0.082982 | 0.153279 | 20,466 | 329 | 123 | 62.206687 | 0.677015 | 0.051305 | 0 | 0.800725 | 0 | 0 | 0.170939 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032609 | false | 0 | 0.018116 | 0 | 0.083333 | 0.021739 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f279fcb242fc62b4d1040ecc780a0d6f3e766cb1 | 152 | py | Python | test/login.py | TheOrangeking/github | 156ce39d48f979179bf8eb4472e30768006365c2 | [
"MIT"
] | null | null | null | test/login.py | TheOrangeking/github | 156ce39d48f979179bf8eb4472e30768006365c2 | [
"MIT"
] | null | null | null | test/login.py | TheOrangeking/github | 156ce39d48f979179bf8eb4472e30768006365c2 | [
"MIT"
] | null | null | null | num1 = 10
num2 = 300
num3 = 49
num3 = 49
num3 = 49
num3 = 49
num3 = 49
num3 = 49
num3 = 49
num3 = 49
num3 = 49
num3 = 49
num3 = 49
num4 = 20
num5 = 25
| 8.941176 | 10 | 0.598684 | 30 | 152 | 3.033333 | 0.333333 | 0.725275 | 1.098901 | 1.318681 | 0.725275 | 0.725275 | 0.725275 | 0.725275 | 0.725275 | 0.725275 | 0 | 0.433962 | 0.302632 | 152 | 16 | 11 | 9.5 | 0.424528 | 0 | 0 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
f27f422381bd1bf512d4a59e075d9aee775bf36c | 12,372 | py | Python | env.py | chucnorrisful/dqn | 9ffc7674aa69907ab1e23c41bb59a6aea7a1400f | [
"MIT"
] | 14 | 2019-02-11T14:06:32.000Z | 2021-10-06T08:07:26.000Z | env.py | chucnorrisful/dqn | 9ffc7674aa69907ab1e23c41bb59a6aea7a1400f | [
"MIT"
] | null | null | null | env.py | chucnorrisful/dqn | 9ffc7674aa69907ab1e23c41bb59a6aea7a1400f | [
"MIT"
] | 6 | 2019-05-22T05:02:52.000Z | 2022-03-27T05:24:32.000Z | from rl.core import Env
from pysc2.env import sc2_env
from pysc2.lib import features
from pysc2.lib import actions
import numpy as np
FUNCTIONS = actions.FUNCTIONS
# Environment wrapper for StarCraft II (pysc2 library).
# Expects actions in the output format of the FullyConv network architecture: a tuple of two arrays:
# - a linear one holding a Q-value for each distinct action
# - a two-dimensional one holding a Q-value for each coordinate on the screen
# The method action_to_sc2 converts this output into actions usable by pysc2.
# The observation format is also defined here; currently two feature layers are passed:
# - feature_screen.player_relative (integer classes 0-3 for (nothing, player, enemy, neutral))
# - feature_screen.selected (1 for the selected unit, 0 for the rest)
# The class implements the Keras-rl interface rl/core/Env.
class Sc2Env2Outputs(Env):
last_obs = None
def __init__(self, screen=16, visualize=False, env_name="MoveToBeacon", training=False):
print("init SC2")
self._SCREEN = screen
self._MINIMAP = screen
self._VISUALIZE = visualize
self._ENV_NAME = env_name
self._TRAINING = training
self.env = sc2_env.SC2Env(
map_name=self._ENV_NAME,
players=[sc2_env.Agent(sc2_env.Race.terran)],
agent_interface_format=features.AgentInterfaceFormat(
feature_dimensions=features.Dimensions(
screen=self._SCREEN,
minimap=self._MINIMAP
),
use_feature_units=True
),
step_mul=8,
game_steps_per_episode=0,
visualize=self._VISUALIZE
)
def action_to_sc2(self, act):
real_action = FUNCTIONS.no_op()
if act.action == 1:
if 331 in self.last_obs.observation.available_actions:
real_action = FUNCTIONS.Move_screen("now", (act.coords[1], act.coords[0]))
elif act.action == 2:
real_action = FUNCTIONS.select_point("toggle", (act.coords[1], act.coords[0]))
elif act.action == 0:
pass
else:
print(act.action, "wtf")
assert False
return real_action
def step(self, action):
# print(action, " ACTION")
real_action = self.action_to_sc2(action)
observation = self.env.step(actions=(real_action,))
self.last_obs = observation[0]
# small_observation = observation[0].observation.feature_screen.unit_density
small_observation = [observation[0].observation.feature_screen.player_relative,
observation[0].observation.feature_screen.selected]
return small_observation, observation[0].reward, observation[0].last(), {}
def reset(self):
observation = self.env.reset()
        if self._TRAINING and np.random.random_integers(0, 1) == 4:  # "== 4" is never satisfied, so this random toggle is effectively disabled
ys, xs = np.where(observation[0].observation.feature_screen.player_relative == 1)
observation = self.env.step(actions=(FUNCTIONS.select_point("toggle", (xs[0], ys[0])),))
observation = self.env.step(actions=(FUNCTIONS.select_army(0),))
self.last_obs = observation[0]
# small_observation = observation[0].observation.feature_screen.unit_density
small_observation = [observation[0].observation.feature_screen.player_relative,
observation[0].observation.feature_screen.selected]
return small_observation
def render(self, mode: str = 'human', close: bool = False):
pass
def close(self):
if self.env:
self.env.close()
def seed(self, seed=None):
if seed:
self.env._random_seed = seed
def configure(self, *args, **kwargs):
switcher = {
'_ENV_NAME': self.set_env_name,
'_SCREEN': self.set_screen,
'_MINIMAP': self.set_minimap,
'_VISUALIZE': self.set_visualize,
}
if kwargs is not None:
for key, value in kwargs:
func = switcher.get(key, lambda: print)
func(value)
def set_env_name(self, name: str):
self._ENV_NAME = name
def set_screen(self, screen: int):
self._SCREEN = screen
def set_visualize(self, visualize: bool):
self._VISUALIZE = visualize
def set_minimap(self, minimap: int):
self._MINIMAP = minimap
@property
def screen(self):
return self._SCREEN
# Same as Sc2Env2Outputs, but with a different output:
# returns ALL screen feature layers (useful as an experiment, but not currently used).
class Sc2Env2OutputsFull(Env):
last_obs = None
def __init__(self, screen=16, visualize=False, env_name="MoveToBeacon", training=False):
print("init SC2")
self._SCREEN = screen
self._MINIMAP = screen
self._VISUALIZE = visualize
self._ENV_NAME = env_name
self._TRAINING = training
self.env = sc2_env.SC2Env(
map_name=self._ENV_NAME,
players=[sc2_env.Agent(sc2_env.Race.terran)],
agent_interface_format=features.AgentInterfaceFormat(
feature_dimensions=features.Dimensions(
screen=self._SCREEN,
minimap=self._MINIMAP
),
use_feature_units=True
),
step_mul=8,
game_steps_per_episode=0,
visualize=self._VISUALIZE
)
def action_to_sc2(self, act):
real_action = FUNCTIONS.no_op()
if act.action == 1:
if 331 in self.last_obs.observation.available_actions:
real_action = FUNCTIONS.Move_screen("now", (act.coords[1], act.coords[0]))
elif act.action == 2:
real_action = FUNCTIONS.select_point("toggle", (act.coords[1], act.coords[0]))
elif act.action == 0:
pass
else:
print(act.action, "unexpected action id")
assert False
return real_action
def step(self, action):
real_action = self.action_to_sc2(action)
observation = self.env.step(actions=(real_action,))
self.last_obs = observation[0]
small_observation = observation[0].observation.feature_screen
# small_observation = [observation[0].observation.feature_screen.player_relative,
# observation[0].observation.feature_screen.selected]
return small_observation, observation[0].reward, observation[0].last(), {}
def reset(self):
self.env.reset()
# if self._TRAINING and np.random.random_integers(0, 1) == 4:
# ys, xs = np.where(observation[0].observation.feature_screen.player_relative == 1)
# observation = self.env.step(actions=(FUNCTIONS.select_point("toggle", (xs[0], ys[0])),))
observation = self.env.step(actions=(FUNCTIONS.select_army(0),))
self.last_obs = observation[0]
small_observation = observation[0].observation.feature_screen
# small_observation = [observation[0].observation.feature_screen.player_relative,
# observation[0].observation.feature_screen.selected]
return small_observation
def render(self, mode: str = 'human', close: bool = False):
pass
def close(self):
if self.env:
self.env.close()
def seed(self, seed=None):
if seed is not None:
self.env._random_seed = seed
def configure(self, *args, **kwargs):
switcher = {
'_ENV_NAME': self.set_env_name,
'_SCREEN': self.set_screen,
'_MINIMAP': self.set_minimap,
'_VISUALIZE': self.set_visualize,
}
if kwargs:
for key, value in kwargs.items():
func = switcher.get(key, lambda v: None)
func(value)
def set_env_name(self, name: str):
self._ENV_NAME = name
def set_screen(self, screen: int):
self._SCREEN = screen
def set_visualize(self, visualize: bool):
self._VISUALIZE = visualize
def set_minimap(self, minimap: int):
self._MINIMAP = minimap
@property
def screen(self):
return self._SCREEN
# Environment wrapper for StarCraft II (pysc2 library).
# This version expects the action as a single vector of Q-values for the corresponding actions.
# With the FullyConv architecture, which provides 2 outputs of different dimensions, this is no longer usable.
# Kept for historical reasons; not yet deleted.
class Sc2Env1Output(Env):
last_obs = None
def __init__(self, screen=16, visualize=False, env_name="MoveToBeacon", training=False):
print("init SC2")
self._SCREEN = screen
self._MINIMAP = screen
self._VISUALIZE = visualize
self._ENV_NAME = env_name
self._TRAINING = training
self.env = sc2_env.SC2Env(
map_name=self._ENV_NAME,
players=[sc2_env.Agent(sc2_env.Race.terran)],
agent_interface_format=features.AgentInterfaceFormat(
feature_dimensions=features.Dimensions(
screen=self._SCREEN,
minimap=self._MINIMAP
),
use_feature_units=True
),
step_mul=8,
game_steps_per_episode=0,
visualize=self._VISUALIZE
)
def action_to_sc2(self, act):
real_action = FUNCTIONS.no_op()
# hacked to only move_screen
if 0 < act <= self._SCREEN * self._SCREEN:
if 331 in self.last_obs.observation.available_actions:
arg = act - 1
x = int(arg / self._SCREEN)
y = arg % self._SCREEN
real_action = FUNCTIONS.Move_screen("now", (y, x))
elif self._SCREEN * self._SCREEN < act <= self._SCREEN * self._SCREEN * 2:  # <= makes the last select_point index reachable (was an off-by-one)
# if FUNCTIONS.select_point.id in self.last_obs.observation.available_actions:
arg = act - 1 - self._SCREEN * self._SCREEN
x = int(arg / self._SCREEN)
y = arg % self._SCREEN
real_action = FUNCTIONS.select_point("toggle", (y, x))
return real_action
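The index arithmetic above flattens a `screen x screen` grid twice into one action id. A small standalone sketch of that decoding (pure arithmetic, no pysc2 needed; `decode_action` is an illustrative name):

```python
def decode_action(act, screen):
    """Map a flat action id to (op, x, y): id 0 is no-op,
    ids 1..screen**2 are Move_screen targets,
    ids screen**2+1..2*screen**2 are select_point targets."""
    if 0 < act <= screen * screen:
        arg = act - 1
        return ('move', arg // screen, arg % screen)
    elif screen * screen < act <= 2 * screen * screen:
        arg = act - 1 - screen * screen
        return ('select', arg // screen, arg % screen)
    return ('noop', None, None)

print(decode_action(1, 16))   # -> ('move', 0, 0)
print(decode_action(18, 16))  # -> ('move', 1, 1)
```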
def step(self, action):
# print(action, " ACTION")
real_action = self.action_to_sc2(action)
observation = self.env.step(actions=(real_action,))
self.last_obs = observation[0]
small_observation = [observation[0].observation.feature_screen.player_relative, observation[0].observation.feature_screen.selected]
return small_observation, observation[0].reward, observation[0].last(), {}
def reset(self):
observation = self.env.reset()
if self._TRAINING and np.random.randint(1, 2) == 1:  # always True; np.random.random_integers is deprecated
ys, xs = np.where(observation[0].observation.feature_screen.player_relative == 1)
observation = self.env.step(actions=(FUNCTIONS.select_point("toggle", (xs[0], ys[0])),))
# observation = self.env.step(actions=(FUNCTIONS.select_army()))
self.last_obs = observation[0]
small_observation = np.array([observation[0].observation.feature_screen.player_relative, observation[0].observation.feature_screen.selected])
return small_observation
def render(self, mode: str = 'human', close: bool = False):
pass
def close(self):
if self.env:
self.env.close()
def seed(self, seed=None):
if seed is not None:
self.env._random_seed = seed
def configure(self, *args, **kwargs):
switcher = {
'_ENV_NAME': self.set_env_name,
'_SCREEN': self.set_screen,
'_MINIMAP': self.set_minimap,
'_VISUALIZE': self.set_visualize,
}
if kwargs:
for key, value in kwargs.items():
func = switcher.get(key, lambda v: None)
func(value)
def set_env_name(self, name: str):
self._ENV_NAME = name
def set_screen(self, screen: int):
self._SCREEN = screen
def set_visualize(self, visualize: bool):
self._VISUALIZE = visualize
def set_minimap(self, minimap: int):
self._MINIMAP = minimap
# File: apis_core/apis_labels/migrations/0003_auto_20200221_1731.py (repo: acdh-oeaw/apis-core, license: MIT)
# Generated by Django 2.1.2 on 2020-02-21 17:31
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('apis_labels', '0002_auto_20200121_1227'),
]
operations = [
migrations.AddField(
model_name='label',
name='end_date',
field=models.DateField(blank=True, null=True),
),
migrations.AddField(
model_name='label',
name='end_date_written',
field=models.CharField(blank=True, max_length=255, null=True, verbose_name='End'),
),
migrations.AddField(
model_name='label',
name='end_end_date',
field=models.DateField(blank=True, null=True),
),
migrations.AddField(
model_name='label',
name='end_start_date',
field=models.DateField(blank=True, null=True),
),
migrations.AddField(
model_name='label',
name='start_date',
field=models.DateField(blank=True, null=True),
),
migrations.AddField(
model_name='label',
name='start_date_written',
field=models.CharField(blank=True, max_length=255, null=True, verbose_name='Start'),
),
migrations.AddField(
model_name='label',
name='start_end_date',
field=models.DateField(blank=True, null=True),
),
migrations.AddField(
model_name='label',
name='start_start_date',
field=models.DateField(blank=True, null=True),
),
]
# File: mfr/extensions/tabular/exceptions.py (repo: alexschiller/modular-file-renderer, license: Apache-2.0)
from mfr.core.exceptions import RendererError
class MissingRequirementsException(RendererError):
pass
class EmptyTableException(RendererError):
pass
class TableTooBigException(RendererError):
pass
class UnexpectedFormattingException(RendererError):
pass
# File: more/body_model/tests/test_body_model.py (repo: morepath/more.body_model, license: BSD-3-Clause)
import pytest
from webtest import TestApp as Client
import morepath
from more.body_model import BodyModelApp
def test_json_obj_dump():
class App(BodyModelApp):
pass
@App.path(path="/models/{x}")
class Model:
def __init__(self, x):
self.x = x
@App.json(model=Model)
def default(self, request):
return self
@App.dump_json(model=Model)
def dump_model_json(self, request):
return {"x": self.x}
c = Client(App())
response = c.get("/models/foo")
assert response.json == {"x": "foo"}
def test_json_obj_load():
class App(BodyModelApp):
pass
class Collection:
def __init__(self):
self.items = []
def add(self, item):
self.items.append(item)
collection = Collection()
@App.path(path="/", model=Collection)
def get_collection():
return collection
@App.json(model=Collection, request_method="POST")
def default(self, request):
self.add(request.body_obj)
return "done"
class Item:
def __init__(self, value):
self.value = value
@App.load_json()
def load_json(json, request):
return Item(json["x"])
c = Client(App())
c.post_json("/", {"x": "foo"})
assert len(collection.items) == 1
assert isinstance(collection.items[0], Item)
assert collection.items[0].value == "foo"
def test_json_obj_load_app_arg():
class App(BodyModelApp):
pass
class Collection:
def __init__(self):
self.items = []
def add(self, item):
self.items.append(item)
collection = Collection()
@App.path(path="/", model=Collection)
def get_collection():
return collection
@App.json(model=Collection, request_method="POST")
def default(self, request):
self.add(request.body_obj)
return "done"
class Item:
def __init__(self, value):
self.value = value
@App.load_json()
def load_json(app, json, request):
assert isinstance(app, App)
return Item(json["x"])
c = Client(App())
c.post_json("/", {"x": "foo"})
assert len(collection.items) == 1
assert isinstance(collection.items[0], Item)
assert collection.items[0].value == "foo"
def test_json_obj_load_default():
class App(BodyModelApp):
pass
class Root:
pass
@App.path(path="/", model=Root)
def get_root():
return Root()
@App.json(model=Root, request_method="POST")
def default(self, request):
assert request.body_obj == request.json
return "done"
c = Client(App())
c.post_json("/", {"x": "foo"})
def test_json_body_model():
class App(BodyModelApp):
pass
class Collection:
def __init__(self):
self.items = []
def add(self, item):
self.items.append(item)
class Item1:
def __init__(self, value):
self.value = value
class Item2:
def __init__(self, value):
self.value = value
collection = Collection()
@App.path(path="/", model=Collection)
def get_collection():
return collection
@App.json(model=Collection, request_method="POST", body_model=Item1)
def default(self, request):
self.add(request.body_obj)
return "done"
@App.load_json()
def load_json(json, request):
if json["@type"] == "Item1":
return Item1(json["x"])
elif json["@type"] == "Item2":
return Item2(json["x"])
c = Client(App())
c.post_json("/", {"@type": "Item1", "x": "foo"})
assert len(collection.items) == 1
assert isinstance(collection.items[0], Item1)
assert collection.items[0].value == "foo"
c.post_json("/", {"@type": "Item2", "x": "foo"}, status=422)
@pytest.mark.xfail(reason="body_model doesn't work on mounted app")
def test_json_body_model_on_mounted_app():
class BaseApp(morepath.App):
pass
class BMApp(BodyModelApp):
pass
class Collection:
def __init__(self):
self.items = []
def add(self, item):
self.items.append(item)
class Item1:
def __init__(self, value):
self.value = value
class Item2:
def __init__(self, value):
self.value = value
collection = Collection()
@BMApp.path(path="/", model=Collection)
def get_collection():
return collection
@BaseApp.mount(app=BMApp, path="bm")
def get_bmapp():
return BMApp()
@BMApp.json(model=Collection, request_method="POST", body_model=Item1)
def default(self, request):
self.add(request.body_obj)
return "done"
@BMApp.load_json()
def load_json(json, request):
if json["@type"] == "Item1":
return Item1(json["x"])
elif json["@type"] == "Item2":
return Item2(json["x"])
c = Client(BaseApp())
c.post_json("/bm", {"@type": "Item1", "x": "foo"})
assert len(collection.items) == 1
assert isinstance(collection.items[0], Item1)
assert collection.items[0].value == "foo"
c.post_json("/bm", {"@type": "Item2", "x": "foo"}, status=422)
def test_json_body_model_on_mounting_and_mounted_app():
class BaseApp(BodyModelApp):
pass
class BMApp(BodyModelApp):
pass
class Collection:
def __init__(self):
self.items = []
def add(self, item):
self.items.append(item)
class Item1:
def __init__(self, value):
self.value = value
class Item2:
def __init__(self, value):
self.value = value
collection = Collection()
@BMApp.path(path="/", model=Collection)
def get_collection():
return collection
@BaseApp.mount(app=BMApp, path="bm")
def get_bmapp():
return BMApp()
@BMApp.json(model=Collection, request_method="POST", body_model=Item1)
def default(self, request):
self.add(request.body_obj)
return "done"
@BMApp.load_json()
def load_json(json, request):
if json["@type"] == "Item1":
return Item1(json["x"])
elif json["@type"] == "Item2":
return Item2(json["x"])
c = Client(BaseApp())
c.post_json("/bm", {"@type": "Item1", "x": "foo"})
assert len(collection.items) == 1
assert isinstance(collection.items[0], Item1)
assert collection.items[0].value == "foo"
c.post_json("/bm", {"@type": "Item2", "x": "foo"}, status=422)
def test_json_body_model_subapp():
class RootApp(BodyModelApp):
pass
class App(RootApp):
pass
class Collection:
def __init__(self):
self.items = []
def add(self, item):
self.items.append(item)
class Item1:
def __init__(self, value):
self.value = value
class Item2:
def __init__(self, value):
self.value = value
collection = Collection()
@App.path(path="/bm", model=Collection)
def get_collection():
return collection
@App.json(model=Collection, request_method="POST", body_model=Item1)
def default(self, request):
self.add(request.body_obj)
return "done"
@App.load_json()
def load_json(json, request):
if json["@type"] == "Item1":
return Item1(json["x"])
elif json["@type"] == "Item2":
return Item2(json["x"])
c = Client(App())
c.post_json("/bm", {"@type": "Item1", "x": "foo"})
assert len(collection.items) == 1
assert isinstance(collection.items[0], Item1)
assert collection.items[0].value == "foo"
c.post_json("/bm", {"@type": "Item2", "x": "foo"}, status=422)
def test_json_obj_load_no_json_post():
class App(BodyModelApp):
pass
class Root:
pass
@App.path(path="/", model=Root)
def get_root():
return Root()
@App.json(model=Root, request_method="POST")
def default(self, request):
assert request.body_obj is None
return "done"
c = Client(App())
response = c.post("/", {"x": "foo"})
assert response.json == "done"
def test_load_interaction():
class App(BodyModelApp):
pass
@App.path(path="/")
class Root:
pass
class A:
pass
class B:
pass
class Error(Exception):
pass
@App.load_json()
def load_json(json, request):
letter = json["letter"]
if letter == "a":
return A()
elif letter == "b":
return B()
else:
raise Error()
def load(request):
return request.body_obj
@App.json(model=Root, request_method="POST", load=load, body_model=A)
def root_post_a(self, request, obj):
assert request.body_obj is obj
if isinstance(obj, A):
return "this is a"
assert False, "never reached"
@App.json(model=Root, request_method="POST", load=load, body_model=B)
def root_post_b(self, request, obj):
assert request.body_obj is obj
if isinstance(obj, B):
return "this is b"
assert False, "never reached"
app = App()
client = Client(app)
r = client.post_json("/", {"letter": "a"})
assert r.json == "this is a"
r = client.post_json("/", {"letter": "b"})
assert r.json == "this is b"
with pytest.raises(Error):
client.post_json("/", {"letter": "c"})
# File: Exercises3/R-3.18.py (repo: opnsesame/Data-Structures-and-Algorithms-Exercises, license: Apache-2.0)
'''
Show that 2^(n+1) is O(2^n).
'''
# 2^(n+1) = 2 * 2^n, so 2^(n+1) <= c * 2^n for all n >= 0 with c = 2;
# hence 2^(n+1) is O(2^n).

# File: LDDMM/optimization.py (repo: HughLDDMM/TreeLDDMMCVPR, license: MIT)
######################### perform optimization ##############################
import torch
from scipy.optimize import minimize
import numpy as np
import time
import sys
import os
sys.path.append(os.path.abspath("../IO"))
from import_export_vtk import export_momenta
from keops_utils import TestCuda
import pickle
params_opt=dict({"lr" : 1,"maxcor" : 10, "gtol" : 1e-3, "tol" : 1e-3, "use_scipy" : True, "method" : 'SLSQP'})
use_cuda,torchdeviceId,torchdtype,KeOpsdeviceId,KeOpsdtype,KernelMethod = TestCuda()
def opt(loss,p0,q0, maxiter = 100, folder2save = '',savename = ''):
"""
Optimization function calling either scipy or torch method.
p0 is the variable to optimize, and can either be the initial momenta or a quaternion depending on the deformation one want to implement.
"""
lr = params_opt["lr"]
maxcor = params_opt["maxcor"]
gtol = params_opt["gtol"]
tol = params_opt["tol"]
use_scipy = params_opt["use_scipy"]  # If use_scipy: perform optimization with L-BFGS via scipy.
method = params_opt["method"]
options = dict( maxiter = maxiter,
ftol = tol,
gtol = gtol,
maxcor = maxcor # Number of previous gradients used to approximate the Hessian
)
loss_dict = {}
loss_dict['A'] = [0]
loss_dict['E'] = [0]
optimizer = torch.optim.LBFGS([p0], line_search_fn='strong_wolfe')
start = time.time()
print('performing optimization...')
opt.nit = -1
def closure():
opt.nit += 1; it = opt.nit
optimizer.zero_grad()
gamma,E,A = loss(p0,q0)
L = gamma*E+A
L.backward(retain_graph=True)  # NOTE: changed to allow chaining after the rigid step; otherwise remove retain_graph!
print("Iteration ",it)
if(folder2save != ''):
if(opt.nit % 5 == 0):
loss_dict['A'].append(float(A.detach().cpu().numpy()))
loss_dict['E'].append(float(E.detach().cpu().numpy()))
return L
# Optimisation using scipy : we need to transfer the data from variable to float64
def numpy_closure(vec):
vec = lr*vec.astype('float64')
numpy_to_model(p0,vec)
c = closure().data.view(-1).cpu().numpy()[0]
dvec = model_to_numpy(p0,grad = True)
return (c,dvec)
def model_to_numpy(p, grad=False) :
if grad :
tensors = p.grad.data.view(-1).cpu().numpy()
else :
tensors = p.data.view(-1).cpu().numpy()
return np.ascontiguousarray( np.hstack(tensors) , dtype='float64' )
def numpy_to_model(p, vec) :
p.data = torch.from_numpy(vec).view(p.data.size()).type(p.data.type())
if use_scipy :
res = minimize( numpy_closure, # function to minimize
model_to_numpy(p0), # starting estimate
method = method,
jac = True, # matching_problems also returns the gradient
options = options)
print(res.message)
else :
for i in range(int(maxiter/20)+1): # Fixed number of iterations
optimizer.step(closure) # "Gradient descent" step.
total_time = round(time.time()-start,2)
print('Optimization time : ',total_time,' seconds')
if(folder2save != ''):
try:
os.mkdir(folder2save)
except OSError:
pass
loss_dict['Time'] = total_time
loss_dict['it'] = opt.nit
with open(folder2save+'/dict_'+savename+'.pkl','wb') as f:
pickle.dump(loss_dict,f)
return (p0,opt.nit,total_time)
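The scipy branch of `opt` wraps a closure that returns `(cost, gradient)` as float64 for `minimize(..., jac=True)`. A toy stand-in for that contract on a quadratic, using only numpy/scipy (the torch/LBFGS parts are omitted; the variable names here are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

target = np.array([1.0, -2.0, 3.0])

def numpy_closure(vec):
    # Mirror the closure contract: return (cost, gradient) as float64.
    diff = vec.astype('float64') - target
    cost = float(diff @ diff)
    grad = 2.0 * diff
    return cost, grad

res = minimize(numpy_closure, np.zeros(3), method='L-BFGS-B', jac=True,
               options=dict(maxiter=100, ftol=1e-12, gtol=1e-8))
print(res.x)  # converges to [1., -2., 3.]
```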
def multiscale_opt(loss,p0,q0, maxiter = 100,folder2save = '',savename = ''):
lr = params_opt["lr"]
maxcor = params_opt["maxcor"]
gtol = params_opt["gtol"]
tol = params_opt["tol"]
use_scipy = params_opt["use_scipy"]  # If use_scipy: perform optimization with L-BFGS via scipy.
method = params_opt["method"]
options = dict( maxiter = maxiter,
ftol = tol,
gtol = gtol,
maxcor = maxcor # Number of previous gradients used to approximate the Hessian
)
loss_dict = {}
loss_dict['A'] = []
loss_dict['E'] = []
loss_dict['E0'] = []
loss_dict['E1'] = []
loss_dict['E2'] = []
loss_dict['E3'] = []
optimizer = torch.optim.LBFGS([p0], line_search_fn='strong_wolfe')
start = time.time()
print('performing optimization...')
opt.nit = -1
def closure():
opt.nit += 1; it = opt.nit
optimizer.zero_grad()
E_list,E,A = loss(p0,q0)
L = E+A
L.backward(retain_graph=True)  # NOTE: changed to allow chaining after the rigid step; otherwise remove retain_graph!
print("Iteration ",it)
print('E : ', E, " A : ", A)
if(folder2save != ''):
if(opt.nit % 5 == 0):
loss_dict['A'].append(float(A.detach().cpu().numpy()))
loss_dict['E'].append(float(E.detach().cpu().numpy()))
for i,E_i in enumerate(E_list):
loss_dict['E'+str(i)].append(float(E_i.detach().cpu().numpy()))
return L
# Optimisation using scipy : we need to transfer the data from variable to float64
def numpy_closure(vec):
vec = lr*vec.astype('float64')
numpy_to_model(p0,vec)
c = closure().data.view(-1).cpu().numpy()[0]
dvec = model_to_numpy(p0,grad = True)
return (c,dvec)
def model_to_numpy(p, grad=False) :
if grad :
tensors = p.grad.data.view(-1).cpu().numpy()
else :
tensors = p.data.view(-1).cpu().numpy()
return np.ascontiguousarray( np.hstack(tensors) , dtype='float64' )
def numpy_to_model(p, vec) :
p.data = torch.from_numpy(vec).view(p.data.size()).type(p.data.type())
if use_scipy :
res = minimize( numpy_closure, # function to minimize
model_to_numpy(p0), # starting estimate
method = method,
jac = True, # matching_problems also returns the gradient
options = options)
print(res.message)
else :
for i in range(int(maxiter/20)+1): # Fixed number of iterations
optimizer.step(closure) # "Gradient descent" step.
total_time = round(time.time()-start,2)
print('Optimization time : ',total_time,' seconds')
if(folder2save != ''):
try:
os.mkdir(folder2save)
except OSError:
pass
with open(folder2save+'/dict_'+savename+'.pkl','wb') as f:
pickle.dump(loss_dict,f)
return (p0,opt.nit,total_time)
def template_opt(loss,P0,template, maxiter = 100):
"""
Here P0 is the list of initial momenta.
Template is also a variable.
"""
lr = params_opt["lr"]
maxcor = params_opt["maxcor"]
gtol = params_opt["gtol"]
tol = params_opt["tol"]
use_scipy = params_opt["use_scipy"]  # If use_scipy: perform optimization with L-BFGS via scipy.
method = params_opt["method"]
options = dict( maxiter = maxiter,
ftol = tol,
gtol = gtol,
maxcor = maxcor # Number of previous gradients used to approximate the Hessian
)
Variables = []
for k,tensor in enumerate(P0):
Variables+=[tensor]
Variables+=[template]
optimizer = torch.optim.LBFGS(Variables,max_eval=maxiter,lr=lr, line_search_fn='strong_wolfe')
start = time.time()
print('performing optimization...')
opt.nit = -1
def closure():
opt.nit += 1; it = opt.nit
optimizer.zero_grad()
L = loss(P0,template)
L.backward(retain_graph=True)
print("Iteration ",it,", Cost = ", L.data.view(-1).cpu().numpy()[0])
return L
# Optimisation using scipy : we need to transfer the data from variable to float64
def numpy_closure(vec):
vec = lr*vec.astype('float64')
numpy_to_model(Variables,vec)
c = closure().data.view(-1).cpu().numpy()[0]
dvec = model_to_numpy(Variables,grad = True)
return (c,dvec)
def model_to_numpy(Variables, grad=False) :
if grad :
tensors = [var.grad.data.view(-1).cpu().numpy() for var in Variables]
np.stack(tensors,axis=0)
else :
tensors = [var.data.view(-1).cpu().numpy() for var in Variables]
np.stack(tensors,axis=0)
tensor = np.ascontiguousarray( np.hstack((tensors)) , dtype='float64' )
return tensor
def numpy_to_model(torch_obj_list, np_obj) :
""" Take the numpy 1d vector of parameters and reshape it into the different tensors (moment+template) """
n_tensors = len(torch_obj_list)
len_obj = np_obj.shape[0]/n_tensors
assert len_obj==int(len_obj),'The numpy object size is not a multiple of the number of tensors'
len_obj=int(len_obj)
for k,tensor in enumerate(torch_obj_list):
tensor.data = torch.from_numpy(np_obj[k*len_obj:(k+1)*len_obj]).view(tensor.data.size()).type(tensor.data.type())
if use_scipy :
res = minimize( numpy_closure, # function to minimize
model_to_numpy(Variables), # starting estimate
method = method,
jac = True, # matching_problems also returns the gradient
options = options )
print(res.message)
else :
for i in range(int(maxiter/20)+1): # Fixed number of iterations
optimizer.step(closure) # "Gradient descent" step.
total_time = round(time.time()-start,2)
print('Optimization time : ',total_time,' seconds')
return (Variables[:-1],Variables[-1],opt.nit,total_time)
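`template_opt`'s `model_to_numpy`/`numpy_to_model` pair assumes all tensors flatten to the same length, so the concatenated vector can be split evenly. A numpy-only sketch of that round trip (`pack`/`unpack` are illustrative names):

```python
import numpy as np

def pack(tensors):
    # Flatten and concatenate the arrays into one contiguous float64 vector.
    return np.ascontiguousarray(np.hstack([t.ravel() for t in tensors]),
                                dtype='float64')

def unpack(vec, shapes):
    # Inverse of pack; assumes every tensor has the same number of elements,
    # exactly like numpy_to_model above.
    n = len(shapes)
    assert vec.shape[0] % n == 0, 'vector size is not a multiple of tensor count'
    size = vec.shape[0] // n
    return [vec[k * size:(k + 1) * size].reshape(shapes[k]) for k in range(n)]

a = np.arange(6.0).reshape(2, 3)
b = np.full((2, 3), 7.0)
vec = pack([a, b])
a2, b2 = unpack(vec, [a.shape, b.shape])
```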
def flow_opt(loss,x0,p0,q0, maxiter = 100,folder2save = '',savename = ''):
lr = params_opt["lr"]
maxcor = params_opt["maxcor"]
gtol = params_opt["gtol"]
tol = params_opt["tol"]
use_scipy = params_opt["use_scipy"]  # If use_scipy: perform optimization with L-BFGS via scipy.
method = params_opt["method"]
options = dict( maxiter = maxiter,
ftol = tol,
gtol = gtol,
maxcor = maxcor # Number of previous gradients used to approximate the Hessian
)
optimizer = torch.optim.LBFGS([p0], line_search_fn='strong_wolfe')
start = time.time()
print('performing optimization...')
opt.nit = -1
def closure():
opt.nit += 1; it = opt.nit
optimizer.zero_grad()
L = loss(x0,p0,q0)
L.backward(retain_graph=True)  # NOTE: changed to allow chaining after the rigid step; otherwise remove retain_graph!
if(folder2save != ''):
if(it==10 or it==50 or it==100 or it==500):
temp = q0.detach().cpu().numpy()
p0_np = p0.detach().cpu().numpy()
export_momenta(temp, p0_np, 'Iter_'+str(it)+'_Momenta_'+savename, folder2save)
return L
# Optimisation using scipy : we need to transfer the data from variable to float64
def numpy_closure(vec):
vec = lr*vec.astype('float64')
numpy_to_model(p0,vec)
c = closure().data.view(-1).cpu().numpy()[0]
dvec = model_to_numpy(p0,grad = True)
return (c,dvec)
def model_to_numpy(p, grad=False) :
if grad :
tensors = p.grad.data.view(-1).cpu().numpy()
else :
tensors = p.data.view(-1).cpu().numpy()
return np.ascontiguousarray( np.hstack(tensors) , dtype='float64' )
def numpy_to_model(p, vec) :
p.data = torch.from_numpy(vec).view(p.data.size()).type(p.data.type())
if use_scipy :
res = minimize( numpy_closure, # function to minimize
model_to_numpy(p0), # starting estimate
method = method,
jac = True, # matching_problems also returns the gradient
options = options)
print(res.message)
else :
for i in range(int(maxiter/20)+1): # Fixed number of iterations
optimizer.step(closure) # "Gradient descent" step.
total_time = round(time.time()-start,2)
print('Optimization time : ',total_time,' seconds')
return (p0,opt.nit,total_time)
def rigid_lddmm_opt(loss, quat0, p0, q0, maxiter = 100,folder2save = '',savename = ''):
lr = params_opt["lr"]
maxcor = params_opt["maxcor"]
gtol = params_opt["gtol"]
tol = params_opt["tol"]
use_scipy = params_opt["use_scipy"]  # If use_scipy: perform optimization with L-BFGS via scipy.
method = params_opt["method"]
options = dict( maxiter = maxiter,
ftol = tol,
gtol = gtol,
maxcor = maxcor # Number of previous gradients used to approximate the Hessian
)
optimizer = torch.optim.LBFGS([p0,quat0],max_eval=maxiter,lr=lr, line_search_fn='strong_wolfe')
start = time.time()
print('performing optimization...')
opt.nit = -1
loss_dict = {}
loss_dict['A'] = [0]
loss_dict['E'] = [0]
loss_dict['E100'] = [0]
loss_dict['E50'] = [0]
loss_dict['E25'] = [0]
loss_dict['E12'] = [0]
def closure():
opt.nit += 1; it = opt.nit
optimizer.zero_grad()
(gamma,E100,E50,E25,E12,A,rotation_cost) = loss(quat0,p0,q0)
E = E100+4.*E50+16.*E25+64.*E12
L = gamma*E+A+0.0001*rotation_cost
L.backward(retain_graph=True) #
print("Iteration ",it,", Cost = ", L.data.view(-1).cpu().numpy()[0])
#print('Grad : ',quat0.grad)
#print('QUAT0 : ', quat0)
if(folder2save != ''):
if(opt.nit % 5 == 0):
loss_dict['A'].append(float(A.detach().cpu().numpy()))
loss_dict['E'].append(float(E.detach().cpu().numpy()))
loss_dict['E100'].append(float(E100.detach().cpu().numpy()))
loss_dict['E50'].append(float(E50.detach().cpu().numpy()))
loss_dict['E25'].append(float(E25.detach().cpu().numpy()))
loss_dict['E12'].append(float(E12.detach().cpu().numpy()))
return L
# Optimisation using scipy : we need to transfer the data from variable to float64
def numpy_closure(vec):
vec = lr*vec.astype('float64')
numpy_to_model(quat0,vec[-7:])
numpy_to_model(p0,vec[:-7].astype('float64'))
c = closure().data.view(-1).cpu().numpy()[0]
dvec = model_to_numpy(p0, quat0, grad=True)  # the gradient was missing: dvec was previously undefined
return (c,dvec)
def model_to_numpy(p,quat, grad=False) :
if grad :
tensors = quat.grad.data.view(-1).cpu().numpy()
p_tensors = p.grad.data.view(-1).cpu().numpy()
else :
tensors = quat.data.view(-1).cpu().numpy()
p_tensors = p.data.view(-1).cpu().numpy()
tensor = np.ascontiguousarray( np.hstack((p_tensors,tensors)) , dtype='float64' )
return tensor
def numpy_to_model(torch_obj, np_obj) :
torch_obj.data = torch.from_numpy(np_obj).view(torch_obj.data.size()).type(torch_obj.data.type())
    if use_scipy:
        res = minimize(numpy_closure,              # function to minimize
                       model_to_numpy(p0, quat0),  # starting estimate
                       method=method,
                       jac=True,                   # numpy_closure also returns the gradient
                       options=options)
        print(res.message)
    else:
        for i in range(int(maxiter/20)+1):  # fixed number of iterations
            optimizer.step(closure)         # "gradient descent" step
    total_time = round(time.time()-start, 2)
    print('Optimization time : ', total_time, ' seconds')
    if folder2save != '':
        try:
            os.mkdir(folder2save)
        except OSError:
            pass
        with open(folder2save+'/dict_'+savename+'.pkl', 'wb') as f:
            pickle.dump(loss_dict, f)
    return (quat0, p0, opt.nit, total_time)

def rigid_opt(loss, quat0, q0, maxiter=100, folder2save='', savename=''):
    lr = params_opt["lr"]
    maxcor = params_opt["maxcor"]
    gtol = params_opt["gtol"]
    tol = params_opt["tol"]
    use_scipy = params_opt["use_scipy"]  # if use_scipy: perform the optimization with scipy's L-BFGS
    method = params_opt["method"]
    options = dict(maxiter=maxiter,
                   ftol=tol,
                   gtol=gtol,
                   maxcor=maxcor  # Number of previous gradients used to approximate the Hessian
                   )
    optimizer = torch.optim.LBFGS([quat0], max_eval=20, lr=lr, line_search_fn='strong_wolfe')
    start = time.time()
    print('performing optimization...')
    opt.nit = -1
    loss_dict = {}
    loss_dict['L'] = [0]

    def closure():
        opt.nit += 1; it = opt.nit
        optimizer.zero_grad()
        L = loss(quat0, q0)
        L.backward(retain_graph=True)
        print("Iteration ", it, ", Cost = ", L.data.view(-1).cpu().numpy()[0])
        if folder2save != '':
            if opt.nit % 5 == 0:
                loss_dict['L'].append(float(L.detach().cpu().numpy()))
        return L
    # Optimisation using scipy: we need to transfer the data from torch variables to float64 arrays
    def numpy_closure(vec):
        vec = lr*vec.astype('float64')
        numpy_to_model(quat0, vec)
        c = closure().data.view(-1).cpu().numpy()[0]
        dvec = model_to_numpy(quat0, grad=True)
        return (c, dvec)

    def model_to_numpy(p, grad=False):
        if grad:
            tensors = p.grad.data.view(-1).cpu().numpy()
        else:
            tensors = p.data.view(-1).cpu().numpy()
        return np.ascontiguousarray(np.hstack(tensors), dtype='float64')

    def numpy_to_model(torch_obj, np_obj):
        torch_obj.data = torch.from_numpy(np_obj).view(torch_obj.data.size()).type(torch_obj.data.type())
    if use_scipy:
        res = minimize(numpy_closure,           # function to minimize
                       model_to_numpy(quat0),   # starting estimate
                       method=method,
                       jac=True,                # numpy_closure also returns the gradient
                       options=options)
        print(res.message)
    else:
        for i in range(int(maxiter)):   # fixed number of iterations
            optimizer.step(closure)     # "gradient descent" step
    total_time = round(time.time()-start, 2)
    print('Optimization time : ', total_time, ' seconds')
    if folder2save != '':
        try:
            os.mkdir(folder2save)
        except OSError:
            pass
        with open(folder2save+'/dict_'+savename+'.pkl', 'wb') as f:
            pickle.dump(loss_dict, f)
    return (quat0, opt.nit, total_time)
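The `numpy_closure` pattern above — a single callable returning `(cost, gradient)` that `scipy.optimize.minimize` consumes with `jac=True` — can be sketched in isolation. A minimal, self-contained sketch with a hypothetical quadratic cost (no torch and no registration loss, purely to show the closure contract):

```python
import numpy as np
from scipy.optimize import minimize

target = np.array([1.0, -2.0, 0.5])

def numpy_closure(vec):
    diff = vec.astype('float64') - target
    cost = float(np.dot(diff, diff))   # scalar cost
    grad = 2.0 * diff                  # analytic gradient, same shape as vec
    return (cost, grad)                # jac=True: minimize expects (cost, gradient)

res = minimize(numpy_closure, np.zeros(3), method='L-BFGS-B', jac=True)
# res.x converges to target
```

Returning the gradient from the same call is what lets L-BFGS reuse the forward pass instead of finite-differencing the cost.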


# ==== posthog/api/test/test_preflight.py (repo: thinhnguyenuit/posthog, license: MIT) ====

from typing import cast
import pytest
from django.conf import settings
from django.utils import timezone
from rest_framework import status

from posthog.constants import RDBMS
from posthog.models.organization import Organization, OrganizationInvite
from posthog.test.base import APIBaseTest
from posthog.version import VERSION


class TestPreflight(APIBaseTest):
    def test_preflight_request_unauthenticated(self):
        """
        For security purposes, the information contained in an unauthenticated preflight request is minimal.
        """
        self.client.logout()
        with self.settings(MULTI_TENANCY=False):
            response = self.client.get("/_preflight/")

        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(
            response.json(),
            {
                "django": True,
                "redis": True,
                "plugins": True,
                "celery": True,
                "db": True,
                "initiated": True,
                "cloud": False,
                "realm": "hosted",
                "available_social_auth_providers": {
                    "google-oauth2": False,
                    "github": False,
                    "gitlab": False,
                    "saml": False,
                },
                "can_create_org": False,
            },
        )

    def test_preflight_request(self):
        with self.settings(MULTI_TENANCY=False):
            response = self.client.get("/_preflight/")
            self.assertEqual(response.status_code, status.HTTP_200_OK)
            response = response.json()
            available_timezones = cast(dict, response).pop("available_timezones")

            self.assertEqual(
                response,
                {
                    "django": True,
                    "redis": True,
                    "plugins": True,
                    "celery": True,
                    "db": True,
                    "initiated": True,
                    "cloud": False,
                    "realm": "hosted",
                    "ee_available": settings.EE_AVAILABLE,
                    "is_clickhouse_enabled": False,
                    "db_backend": "postgres",
                    "available_social_auth_providers": {
                        "google-oauth2": False,
                        "github": False,
                        "gitlab": False,
                        "saml": False,
                    },
                    "opt_out_capture": False,
                    "posthog_version": VERSION,
                    "email_service_available": False,
                    "is_debug": False,
                    "is_event_property_usage_enabled": False,
                    "licensed_users_available": None,
                    "site_url": "http://localhost:8000",
                    "can_create_org": False,
                },
            )
            self.assertDictContainsSubset({"Europe/Moscow": 3, "UTC": 0}, available_timezones)

    @pytest.mark.ee
    def test_cloud_preflight_request_unauthenticated(self):
        self.client.logout()  # make sure it works anonymously

        with self.settings(MULTI_TENANCY=True, PRIMARY_DB=RDBMS.CLICKHOUSE):
            response = self.client.get("/_preflight/")

        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(
            response.json(),
            {
                "django": True,
                "redis": True,
                "plugins": True,
                "celery": True,
                "db": True,
                "initiated": True,
                "cloud": True,
                "realm": "cloud",
                "available_social_auth_providers": {
                    "google-oauth2": False,
                    "github": False,
                    "gitlab": False,
                    "saml": False,
                },
                "can_create_org": True,
            },
        )

    @pytest.mark.ee
    def test_cloud_preflight_request(self):
        with self.settings(MULTI_TENANCY=True, PRIMARY_DB=RDBMS.CLICKHOUSE, SITE_URL="https://app.posthog.com"):
            response = self.client.get("/_preflight/")
            self.assertEqual(response.status_code, status.HTTP_200_OK)
            response = response.json()
            available_timezones = cast(dict, response).pop("available_timezones")

            self.assertEqual(
                response,
                {
                    "django": True,
                    "redis": True,
                    "plugins": True,
                    "celery": True,
                    "db": True,
                    "initiated": True,
                    "cloud": True,
                    "realm": "cloud",
                    "ee_available": True,
                    "is_clickhouse_enabled": True,
                    "db_backend": "clickhouse",
                    "available_social_auth_providers": {
                        "google-oauth2": False,
                        "github": False,
                        "gitlab": False,
                        "saml": False,
                    },
                    "opt_out_capture": False,
                    "posthog_version": VERSION,
                    "email_service_available": False,
                    "is_debug": False,
                    "is_event_property_usage_enabled": False,
                    "licensed_users_available": None,
                    "site_url": "https://app.posthog.com",
                    "can_create_org": True,
                },
            )
            self.assertDictContainsSubset({"Europe/Moscow": 3, "UTC": 0}, available_timezones)

    @pytest.mark.ee
    def test_cloud_preflight_request_with_social_auth_providers(self):
        with self.settings(
            SOCIAL_AUTH_GOOGLE_OAUTH2_KEY="test_key",
            SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET="test_secret",
            MULTI_TENANCY=True,
            EMAIL_HOST="localhost",
            PRIMARY_DB=RDBMS.CLICKHOUSE,
        ):
            response = self.client.get("/_preflight/")
            self.assertEqual(response.status_code, status.HTTP_200_OK)
            response = response.json()
            available_timezones = cast(dict, response).pop("available_timezones")

            self.assertEqual(
                response,
                {
                    "django": True,
                    "redis": True,
                    "plugins": True,
                    "celery": True,
                    "db": True,
                    "initiated": True,
                    "cloud": True,
                    "realm": "cloud",
                    "ee_available": True,
                    "is_clickhouse_enabled": True,
                    "db_backend": "clickhouse",
                    "available_social_auth_providers": {
                        "google-oauth2": True,
                        "github": False,
                        "gitlab": False,
                        "saml": False,
                    },
                    "opt_out_capture": False,
                    "posthog_version": VERSION,
                    "email_service_available": True,
                    "is_debug": False,
                    "is_event_property_usage_enabled": False,
                    "licensed_users_available": None,
                    "site_url": "http://localhost:8000",
                    "can_create_org": True,
                },
            )
            self.assertDictContainsSubset({"Europe/Moscow": 3, "UTC": 0}, available_timezones)

    @pytest.mark.ee
    @pytest.mark.skip_on_multitenancy
    def test_ee_preflight_with_saml(self):
        from ee.models.license import License, LicenseManager

        super(LicenseManager, cast(LicenseManager, License.objects)).create(
            key="key_123", plan="enterprise", valid_until=timezone.datetime(2038, 1, 19, 3, 14, 7),
        )

        self.client.logout()  # make sure it works anonymously

        with self.settings(PRIMARY_DB=RDBMS.CLICKHOUSE, SAML_CONFIGURED=True):
            response = self.client.get("/_preflight/")

        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(
            response.json(),
            {
                "django": True,
                "redis": True,
                "plugins": True,
                "celery": True,
                "db": True,
                "initiated": True,
                "cloud": False,
                "realm": "hosted-clickhouse",
                "available_social_auth_providers": {
                    "google-oauth2": False,
                    "github": False,
                    "gitlab": False,
                    "saml": True,
                },
                "can_create_org": False,
            },
        )

    @pytest.mark.ee
    @pytest.mark.skip_on_multitenancy
    def test_ee_preflight_with_users_limit(self):
        from ee.models.license import License, LicenseManager

        super(LicenseManager, cast(LicenseManager, License.objects)).create(
            key="key_123", plan="free_clickhouse", valid_until=timezone.datetime(2038, 1, 19, 3, 14, 7), max_users=3,
        )
        OrganizationInvite.objects.create(organization=self.organization, target_email="invite@posthog.com")

        response = self.client.get("/_preflight/")
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.json()["licensed_users_available"], 1)
        self.assertEqual(response.json()["can_create_org"], False)

    def test_can_create_org_in_fresh_instance(self):
        Organization.objects.all().delete()

        response = self.client.get("/_preflight/")
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.json()["can_create_org"], True)

    @pytest.mark.ee
    @pytest.mark.skip_on_multitenancy
    def test_can_create_org_with_multi_org(self):
        # First with no license
        with self.settings(MULTI_ORG_ENABLED=True):
            response = self.client.get("/_preflight/")
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.json()["can_create_org"], False)

        # Now with a proper license
        from ee.models.license import License, LicenseManager

        super(LicenseManager, cast(LicenseManager, License.objects)).create(
            key="key_123", plan="enterprise", valid_until=timezone.datetime(2038, 1, 19, 3, 14, 7), max_users=3,
        )
        with self.settings(MULTI_ORG_ENABLED=True):
            response = self.client.get("/_preflight/")
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.json()["can_create_org"], True)


# ==== src/frontpages/__init__.py (repo: mpflynnx/frontpages, license: MIT) ====

"""Top-level package for frontpages."""
__version__ = "0.3.0"
from .file_functions import * # noqa
from .image_functions import * # noqa
from .main import main # noqa
from .web_functions import * # noqa


# ==== scripts/calendar_view_gui/utils/lut.py (repo: CsabaWirnhardt/cbm, license: BSD-3-Clause) ====

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# This file is part of CbM (https://github.com/ec-jrc/cbm).
# Author : Csaba Wirnhardt
# Credits : GTCAP Team
# Copyright : 2021 European Commission, Joint Research Centre
# License : 3-Clause BSD
from osgeo import gdal
import sys

def getCumulativeCutCountForOneBand(tifFileName, band, leftPercent, rightPercent):
    src_ds = gdal.Open(tifFileName)
    if src_ds is None:
        print("Unable to open:", tifFileName)
        sys.exit(1)
    srcband = src_ds.GetRasterBand(band)
    (min, max) = srcband.ComputeRasterMinMax(band)
    # srcband.SetNoDataValue(noData)
    # stats = band.GetStatistics(approx, 1)
    if min == max:
        lowerLimit = 0
        upperLimit = 0
        return (lowerLimit, upperLimit)
    else:
        dfMin = float(0.5)
        dfMax = float(max + 0.5)
        nBuckets = int(dfMax - dfMin)
        # bIncludeOutOfRange: if TRUE, values below the histogram range will be mapped into panHistogram[0]
        # and values above will be mapped into panHistogram[nBuckets-1]; otherwise out-of-range values are discarded
        ioor = 0
        force = 1
        approxok = 0
        hist = srcband.GetHistogram(dfMin, dfMax, nBuckets, ioor, approxok)
        totalNumberOfPixels = sum(hist)
        leftTarget = int(round(totalNumberOfPixels * (leftPercent/100), 0))
        rightTarget = int(round(totalNumberOfPixels - (totalNumberOfPixels * (rightPercent/100)), 0))
        # go through the list of histogram values from the left and see when we
        # reach the left and right percentages
        i = 0
        cumulativeHist = 0
        for entry in hist:
            cumulativeHist += entry
            if leftTarget >= cumulativeHist - entry and leftTarget < cumulativeHist:
                lowerLimit = i
            if rightTarget >= cumulativeHist - entry and rightTarget < cumulativeHist:
                upperLimit = i
            i += 1
        return (lowerLimit, upperLimit)
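The loop above implements a percent-clip ("cumulative cut") stretch: walk the cumulative histogram and record the buckets where the left and right pixel-count targets are crossed. The same limits can be computed without GDAL — a minimal NumPy sketch (hypothetical `cumulative_cut` helper, operating on a plain histogram list):

```python
import numpy as np

def cumulative_cut(hist, left_percent, right_percent):
    """Return (lower, upper) bucket indices that clip the given
    percentage of pixels from each tail of a histogram."""
    total = int(np.sum(hist))
    left_target = int(round(total * left_percent / 100.0))
    right_target = int(round(total - total * right_percent / 100.0))
    cum = np.cumsum(hist)
    # first bucket whose cumulative count exceeds each target,
    # matching the cum[i-1] <= target < cum[i] test in the loop above
    lower = int(np.searchsorted(cum, left_target, side='right'))
    upper = int(np.searchsorted(cum, right_target, side='right'))
    return (lower, upper)

# e.g. a 5-bucket histogram of 100 pixels, clipping 15% from each tail
limits = cumulative_cut([10, 20, 40, 20, 10], 15, 15)
```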

def lutStretch(tifFileName, output, leftPercent, rightPercent, bands):
    lutDict = {}
    for band in bands:
        (lowerLimit, upperLimit) = getCumulativeCutCountForOneBand(tifFileName, band, leftPercent, rightPercent)
        lutDict[band] = (lowerLimit, upperLimit)
    rmin = lutDict[bands[0]][0]
    rmax = lutDict[bands[0]][1]
    gmin = lutDict[bands[1]][0]
    gmax = lutDict[bands[1]][1]
    bmin = lutDict[bands[2]][0]
    bmax = lutDict[bands[2]][1]
    # scaleParams --- list of scale parameters, each of the form [src_min,src_max] or [src_min,src_max,dst_min,dst_max]
    ds = gdal.Open(tifFileName)
    ds = gdal.Translate(output, ds, scaleParams=[[rmin, rmax, 0, 255], [gmin, gmax, 0, 255], [bmin, bmax, 0, 255]], bandList=bands, outputType=gdal.GDT_Byte)
    ds = None

def writeMinMaxToFile(tifFileName, acqDate, bands, leftPercent, rightPercent, lutTxtFile, tile_name):
    lutDict = {}
    for band in bands:
        (lowerLimit, upperLimit) = getCumulativeCutCountForOneBand(tifFileName, band, leftPercent, rightPercent)
        lutDict[band] = (lowerLimit, upperLimit)
    rmin = lutDict[bands[0]][0]
    rmax = lutDict[bands[0]][1]
    gmin = lutDict[bands[1]][0]
    gmax = lutDict[bands[1]][1]
    bmin = lutDict[bands[2]][0]
    bmax = lutDict[bands[2]][1]
    with open(lutTxtFile, 'a') as fout:  # close the file after appending one line
        print(acqDate, tile_name, leftPercent, rightPercent, rmin, rmax, gmin, gmax, bmin, bmax, file=fout)

def lutStretchMagicLut(tifFileName, output, bands):
    # fixed "magic" stretch limits per band:
    # B08 1200-5700, B11 800-4100, B04 150-2800
    rmin = 1200
    rmax = 5700
    gmin = 800
    gmax = 4100
    bmin = 150
    bmax = 2800
    # scaleParams --- list of scale parameters, each of the form [src_min,src_max] or [src_min,src_max,dst_min,dst_max]
    ds = gdal.Open(tifFileName)
    ds = gdal.Translate(output, ds, scaleParams=[[rmin, rmax, 0, 255], [gmin, gmax, 0, 255], [bmin, bmax, 0, 255]], bandList=bands, outputType=gdal.GDT_Byte)
    ds = None

def get_cumulative_cut_count_for_one_band_float(tifFileName, leftPercent, rightPercent):
    src_ds = gdal.Open(tifFileName)
    if src_ds is None:
        print("Unable to open:", tifFileName)
        sys.exit(1)
    srcband = src_ds.GetRasterBand(1)
    (min, max) = srcband.ComputeRasterMinMax(1)
    dfMin = float(0.5)
    dfMax = float(max + 0.5)
    nBuckets = int(dfMax - dfMin)
    # bIncludeOutOfRange: if TRUE, values below the histogram range will be mapped into panHistogram[0]
    # and values above will be mapped into panHistogram[nBuckets-1]; otherwise out-of-range values are discarded
    ioor = 0
    force = 1
    approxok = 0
    # hist = srcband.GetHistogram(dfMin, dfMax, nBuckets, ioor, approxok)
    hist = srcband.GetHistogram()
    totalNumberOfPixels = sum(hist)
    leftTarget = int(round(totalNumberOfPixels * (leftPercent/100), 0))
    rightTarget = int(round(totalNumberOfPixels - (totalNumberOfPixels * (rightPercent/100)), 0))
    # go through the list of histogram values from the left and see when we
    # reach the left and right percentages
    i = 0
    cumulativeHist = 0
    for entry in hist:
        cumulativeHist += entry
        if leftTarget >= cumulativeHist - entry and leftTarget < cumulativeHist:
            lowerLimit = i
        if rightTarget >= cumulativeHist - entry and rightTarget < cumulativeHist:
            upperLimit = i
        i += 1
    return (lowerLimit, upperLimit)

def lut_stretch_one_band_s1_bs(tifFileName, output, leftPercent, rightPercent):
    band = 1
    # (lowerLimit, upperLimit) = get_cumulative_cut_count_for_one_band_float(tifFileName, leftPercent, rightPercent)
    (lowerLimit, upperLimit) = getCumulativeCutCountForOneBand(tifFileName, band, leftPercent, rightPercent)
    if lowerLimit == 0 and upperLimit == 0:
        print("no data in image file:", tifFileName)
        return
    else:
        # scaleParams --- list of scale parameters, each of the form [src_min,src_max] or [src_min,src_max,dst_min,dst_max]
        ds = gdal.Open(tifFileName)
        ds = gdal.Translate(output, ds, scaleParams=[[lowerLimit, upperLimit, 0, 255]], bandList=[1], outputType=gdal.GDT_Byte)
        ds = None
# tifFileName = "e:/chips/be_fl_for_s1_comparison_07/1__/s1_bs/20200402T055753_VH_x10000.tif"
# output = "e:/chips/be_fl_for_s1_comparison_07/1__/s1_bs/20200402T055753_VH_x10000_lut.tif"
# leftPercent = 1
# rightPercent = 4
# lut_stretch_one_band_s1_bs(tifFileName, output, leftPercent, rightPercent)
# tifFileName = "e:/MS/ES/Catalunia2019/raster/chips/343130_wheat_merged/s2_2019-05-06.tif"
# output = "e:/MS/ES/Catalunia2019/raster/chips/343130_wheat_merged_lut/s2_2019-05-06.tif"
# bands=[1,2,3]
# acqDate = "2019-05-06"
# lutTxtFile = "e:/MS/ES/Catalunia2019/raster/chips/lutTxt/lut.txt"
# writeMinMaxToFile(tifFileName, acqDate, bands, leftPercent, rightPercent, lutTxtFile)
# # lutStretch(tifFileName, output, leftPercent, rightPercent, bands )


# ==== tests/test_rest.py (repo: oarepo/invenio-records-draft, license: MIT) ====

import json

from tests.helpers import remove_ts


def test_draft_create(app, db, client, prepare_es, test_users):
    resp = client.post('/draft/records/', data=json.dumps({'title': 'test'}), content_type='application/json')
    assert resp.status_code == 201
    resp = resp.json
    assert remove_ts(resp) == {
        "id": "1",
        "links": {
            "self": "http://localhost:5000/draft/records/1",
            'files': 'http://localhost:5000/draft/records/1/files/'
        },
        "metadata": {
            "$schema": "https://localhost:5000/schemas/sample/sample-v1.0.0.json",
            "control_number": "1",
            "oarepo:validity": {
                "errors": {
                    "marshmallow": [{
                        "field": "title",
                        "message": "Shorter than minimum length 5."
                    }]
                },
                "valid": False
            },
            'oarepo:draft': True,
            "title": "test"
        },
        "revision": 0
    }

    resp = client.get('/draft/records/')
    assert resp.status_code == 200
    resp = resp.json
    assert remove_ts(resp) == {
        'aggregations': {},
        'hits': {
            'hits': [
                {
                    'id': '1',
                    'links': {
                        'self': 'http://localhost:5000/draft/records/1',
                        'files': 'http://localhost:5000/draft/records/1/files/'
                    },
                    'metadata': {
                        '$schema': 'https://localhost:5000/schemas/sample/sample-v1.0.0.json',
                        'control_number': '1',
                        'oarepo:validity': {
                            'errors': {
                                'marshmallow': [
                                    {
                                        'field': 'title',
                                        'message': 'Shorter than minimum length 5.'
                                    }]},
                            'valid': False
                        },
                        'oarepo:draft': True,
                        'title': 'test'},
                    'revision': 0,
                }],
            'total': 1},
        'links': {'self': 'http://localhost:5000/draft/records/?size=10&page=1'}}

    print('before patch')
    resp = client.patch('/draft/records/1',
                        data=json.dumps([{'op': 'replace', 'path': '/title', 'value': 'longer test'}]),
                        content_type='application/json-patch+json')
    assert resp.status_code == 200

    resp = client.get('/draft/records/')
    assert resp.status_code == 200
    resp = resp.json
    assert remove_ts(resp) == {
        'aggregations': {},
        'hits': {
            'hits': [
                {
                    'id': '1',
                    'links': {
                        'self': 'http://localhost:5000/draft/records/1',
                        'files': 'http://localhost:5000/draft/records/1/files/'
                    },
                    'metadata': {
                        '$schema': 'https://localhost:5000/schemas/sample/sample-v1.0.0.json',
                        'control_number': '1',
                        'oarepo:validity': {
                            'valid': True
                        },
                        'oarepo:draft': True,
                        'title': 'longer test'
                    },
                    'revision': 1,
                }],
            'total': 1},
        'links': {'self': 'http://localhost:5000/draft/records/?size=10&page=1'}}

    resp = client.post('/draft/records/1/publish')
    assert resp.status_code == 401

    # login first user
    resp = client.post('/test/login/1')
    assert resp.status_code == 200

    # this user can publish the record ...
    resp = client.get('/draft/records/1')
    assert resp.status_code == 200
    resp = resp.json
    assert remove_ts(resp) == {
        'id': '1',
        'links': {
            'self': 'http://localhost:5000/draft/records/1',
            'publish': 'http://localhost:5000/draft/records/1/publish',
            'files': 'http://localhost:5000/draft/records/1/files/'
        },
        'metadata': {
            '$schema': 'https://localhost:5000/schemas/sample/sample-v1.0.0.json',
            'control_number': '1',
            'oarepo:validity': {
                'valid': True
            },
            'oarepo:draft': True,
            'title': 'longer test'
        },
        'revision': 1,
    }

    resp = client.post('/draft/records/1/publish')
    assert resp.status_code == 302
    print(resp.data)
    print(resp.headers)
    assert resp.headers['Location'] == 'http://localhost:5000/records/1'
    resp = resp.json
    assert resp == {
        "links": {
            "published": "http://localhost:5000/records/1"
        },
        "status": "ok"
    }

    resp = client.get('/draft/records/1')
    assert resp.status_code == 410

    resp = client.get('/records/1')
    assert resp.status_code == 200

    # edit the record - at first, no permissions
    resp = client.post('/test/logout')
    assert resp.status_code == 200
    resp = client.post('/records/1/edit')
    assert resp.status_code == 401

    # login first user
    resp = client.post('/test/login/1')
    assert resp.status_code == 200
    resp = client.post('/records/1/edit')
    assert resp.status_code == 302
    assert resp.headers['Location'] == 'http://localhost:5000/draft/records/1'
    resp = resp.json
    assert resp == {
        "links": {
            "draft": "http://localhost:5000/draft/records/1"
        }
    }

    # record still exists during edit
    resp = client.get('/records/1')
    assert resp.status_code == 200

    # patch it
    resp = client.patch('/draft/records/1',
                        data=json.dumps([{'op': 'replace', 'path': '/title', 'value': 'longer test edit'}]),
                        content_type='application/json-patch+json')
    assert resp.status_code == 200

    # and publish again
    resp = client.post('/draft/records/1/publish')
    assert resp.status_code == 302
    print(resp.data)
    print(resp.headers)
    assert resp.headers['Location'] == 'http://localhost:5000/records/1'
    resp = resp.json
    assert resp == {
        "links": {
            "published": "http://localhost:5000/records/1"
        },
        "status": "ok"
    }

    # unpublish the record - at first, no permissions
    resp = client.post('/test/logout')
    assert resp.status_code == 200
    resp = client.post('/records/1/unpublish')
    assert resp.status_code == 401

    # login first user
    resp = client.post('/test/login/1')
    assert resp.status_code == 200
    resp = client.post('/records/1/unpublish')
    assert resp.status_code == 302
    assert resp.headers['Location'] == 'http://localhost:5000/draft/records/1'
    resp = resp.json
    assert resp == {
        "links": {
            "draft": "http://localhost:5000/draft/records/1"
        },
        "status": "ok"
    }

    # record does not exist during edit
    resp = client.get('/records/1')
    assert resp.status_code == 410

    # patch it
    resp = client.patch('/draft/records/1',
                        data=json.dumps([{'op': 'replace', 'path': '/title', 'value': 'longer test edit'}]),
                        content_type='application/json-patch+json')
    assert resp.status_code == 200

    # and publish again
    resp = client.post('/draft/records/1/publish')
    assert resp.status_code == 302
    print(resp.data)
    print(resp.headers)
    assert resp.headers['Location'] == 'http://localhost:5000/records/1'
    resp = resp.json
    assert resp == {
        "links": {
            "published": "http://localhost:5000/records/1"
        },
        "status": "ok"
    }


# ==== samples/migrateADCGen1/mappers/adlsg1.py (repo: daniel-dqsdatalabs/pyapacheatlas, license: MIT) ====

import sys
sys.path.append("./")
from .assetmapper import AssetMapper
from urllib.parse import urlparse


class ADLSGen1Directory(AssetMapper):

    def __init__(self, asset, termMap, typeName='azure_datalake_gen1_path', columnTypeName='column'):
        super().__init__(asset, termMap, typeName=typeName, columnTypeName=columnTypeName)

    def qualified_name(self):
        url = self.asset["properties"].get("dsl", {}).get("address", {}).get("url", None)
        parsed = urlparse(url)
        url = parsed.geturl().replace("https", "adl", 1)
        return f"{url}"

    def column_qualified_name_pattern(self, columnName, **kwargs):
        return columnName

    # Override
    def partial_column_updates(self):
        return []


class ADLSGen1DataLake(AssetMapper):

    def __init__(self, asset, termMap, typeName='azure_datalake_gen1_path', columnTypeName='column'):
        super().__init__(asset, termMap, typeName=typeName, columnTypeName=columnTypeName)

    def qualified_name(self):
        url = self.asset["properties"].get("dsl", {}).get("address", {}).get("url", None)
        parsed = urlparse(url)
        url = parsed.geturl().replace("https", "adl", 1)
        if url[-1] == "/":
            url = url[:-1]
        return f"{url}"

    def column_qualified_name_pattern(self, columnName, **kwargs):
        return columnName

    # Override
    def partial_column_updates(self):
        return []
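The `qualified_name` methods above rewrite an `https://` ADLS Gen1 URL into an `adl://` qualified name, with the data-lake variant also dropping a trailing slash. That rewrite can be sketched on its own with a hypothetical helper (only the standard library `urllib.parse`, independent of `AssetMapper`):

```python
from urllib.parse import urlparse

def to_qualified_name(url):
    """Rewrite an https:// ADLS Gen1 URL into an adl:// qualified
    name, removing any trailing slash (as ADLSGen1DataLake does)."""
    adl = urlparse(url).geturl().replace("https", "adl", 1)
    return adl[:-1] if adl.endswith("/") else adl

# to_qualified_name("https://store.azuredatalakestore.net/folder/")
# -> "adl://store.azuredatalakestore.net/folder"
```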


# ==== tfprob/gan/__init__.py (repo: AlexBlack2202/EigenGAN-Tensorflow, license: MIT) ====

from tfprob.gan.gradient_penalty import *
from tfprob.gan.loss import *


# ==== PostCruiseRoutines/io_utils/EcoFOCI_db_io.py (repo: NOAA-PMEL/AtSeaPrograms, license: MIT) ====

#!/usr/bin/env python
"""
Background:
--------
EcoFOCI_db_io.py
Purpose:
--------
Various Routines and Classes to interface with the mysql database that houses EcoFOCI meta data
History:
--------
"""
import pymysql
import ConfigParserLocal
import datetime
__author__ = 'Shaun Bell'
__email__ = 'shaun.bell@noaa.gov'
__created__ = datetime.datetime(2014, 01, 29)
__modified__ = datetime.datetime(2016, 8, 10)
__version__ = "0.1.0"
__status__ = "Development"
__keywords__ = 'netCDF','meta','header'
class EcoFOCI_db_Moorings(object):
"""Class definitions to access EcoFOCI Mooring Database"""
def connect_to_DB(self, db_config_file=None):
"""Try to establish database connection
Parameters
----------
db_config_file : str
full path to json formatted database config file
"""
self.db_config = ConfigParserLocal.get_config(db_config_file)
try:
self.db = pymysql.connect(self.db_config['host'],
self.db_config['user'],
self.db_config['password'],
self.db_config['database'],
self.db_config['port'])
print "connected"
except:
print "db error"
# prepare a cursor object using cursor() method
self.cursor = self.db.cursor(pymysql.cursors.DictCursor)
return(self.db,self.cursor)
def manual_connect_to_DB(self, host='localhost', user='viewer',
password=None, database='ecofoci', port=3306):
"""Try to establish database connection
Parameters
----------
host : str
ip or domain name of host
user : str
account user
password : str
account password
database : str
database name to connect to
port : int
database port
"""
self.db_config['host'] = host
self.db_config['user'] = user
self.db_config['password'] = password
self.db_config['database'] = database
self.db_config['port'] = port
try:
self.db = pymysql.connect(self.db_config['host'],
self.db_config['user'],
self.db_config['password'],
self.db_config['database'],
self.db_config['port'])
print "connected manually"
except:
print "db error"
# prepare a cursor object using cursor() method
self.cursor = self.db.cursor(pymysql.cursors.DictCursor)
return(self.db,self.cursor)
def read_mooring(self, table=None, MooringID=None, verbose=False):
sql = ("SELECT * from `{0}` WHERE `MooringID`= '{1}'").format(table, MooringID)
if verbose:
print sql
result_dic = {}
try:
# Execute the SQL command
self.cursor.execute(sql)
# Get column names
rowid = {}
counter = 0
for i in self.cursor.description:
rowid[i[0]] = counter
counter = counter +1
#print rowid
# Fetch all the rows in a list of lists.
results = self.cursor.fetchall()
for row in results:
result_dic[row['MooringID']] ={keys: row[keys] for val, keys in enumerate(row.keys())}
return (result_dic)
except:
            print "Error: unable to fetch data"
def close(self):
"""close database"""
self.db.close()
class EcoFOCI_db_Cruises(object):
"""Class definitions to access EcoFOCI Cruise/CTD Database"""
def connect_to_DB(self, db_config_file=None):
"""Try to establish database connection
Parameters
----------
db_config_file : str
full path to json formatted database config file
"""
self.db_config = ConfigParserLocal.get_config(db_config_file)
try:
self.db = pymysql.connect(self.db_config['host'],
self.db_config['user'],
self.db_config['password'],
self.db_config['database'],
self.db_config['port'])
except:
print "db error"
# prepare a cursor object using cursor() method
self.cursor = self.db.cursor(pymysql.cursors.DictCursor)
return(self.db,self.cursor)
def manual_connect_to_DB(self, host='localhost', user='viewer',
password=None, database='ecofoci', port=3306):
"""Try to establish database connection
Parameters
----------
host : str
ip or domain name of host
user : str
account user
password : str
account password
database : str
database name to connect to
port : int
database port
"""
self.db_config['host'] = host
self.db_config['user'] = user
self.db_config['password'] = password
self.db_config['database'] = database
self.db_config['port'] = port
try:
self.db = pymysql.connect(self.db_config['host'],
self.db_config['user'],
self.db_config['password'],
self.db_config['database'],
self.db_config['port'])
except:
print "db error"
# prepare a cursor object using cursor() method
self.cursor = self.db.cursor(pymysql.cursors.DictCursor)
return(self.db,self.cursor)
def read_cruisecastlogs(self, table=None, verbose=False, **kwargs):
if 'UniqueCruiseID' in kwargs.keys():
sql = ("SELECT * from `{0}` WHERE `UniqueCruiseID`= '{1}'").format(table, kwargs['UniqueCruiseID'])
elif 'CruiseID' in kwargs.keys():
sql = ("SELECT * from `{0}` WHERE `CruiseID`= '{1}'").format(table, kwargs['CruiseID'])
else:
            raise DBVariableNamingError("UniqueCruiseID or CruiseID must be specified as a keyword-value pair")
if verbose:
print sql
result_dic = {}
try:
# Execute the SQL command
self.cursor.execute(sql)
# Get column names
rowid = {}
counter = 0
for i in self.cursor.description:
rowid[i[0]] = counter
counter = counter +1
#print rowid
# Fetch all the rows in a list of lists.
results = self.cursor.fetchall()
for row in results:
result_dic[row['ConsecutiveCastNo']] ={keys: row[keys] for val, keys in enumerate(row.keys())}
return (result_dic)
except:
            print "Error: unable to fetch data"
def close(self):
"""close database"""
self.db.close()
class DBVariableNamingError(Exception):
"""Raise for kwargs that are not in the database as column/variable names""" | 26.960352 | 105 | 0.638235 | 769 | 6,120 | 4.951886 | 0.209363 | 0.07563 | 0.107143 | 0.02521 | 0.783351 | 0.778361 | 0.758929 | 0.758929 | 0.742647 | 0.742647 | 0 | 0.008127 | 0.235948 | 6,120 | 227 | 106 | 26.960352 | 0.806245 | 0.063072 | 0 | 0.779661 | 0 | 0 | 0.148206 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.067797 | 0.025424 | null | null | 0.084746 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
0a4705453b3b8872c890eeaee27a0b09e9f7ead2 | 4,797 | py | Python | tests/test_chooser.py | kgreenek/py_trees | fe4e3302d078c33e48e2e9d21dd3bdd62f4546e7 | [
"BSD-3-Clause"
] | null | null | null | tests/test_chooser.py | kgreenek/py_trees | fe4e3302d078c33e48e2e9d21dd3bdd62f4546e7 | [
"BSD-3-Clause"
] | null | null | null | tests/test_chooser.py | kgreenek/py_trees | fe4e3302d078c33e48e2e9d21dd3bdd62f4546e7 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
#
# License: BSD
# https://raw.githubusercontent.com/stonier/py_trees/devel/LICENSE
#
##############################################################################
# Imports
##############################################################################
import py_trees
import py_trees.console as console
##############################################################################
# Logging Level
##############################################################################
py_trees.logging.level = py_trees.logging.Level.DEBUG
logger = py_trees.logging.Logger("Nosetest")
##############################################################################
# Tests
##############################################################################
def test_low_priority_runner():
console.banner("Low Priority Runner")
root = py_trees.composites.Chooser()
failure = py_trees.behaviours.Failure("Failure")
running = py_trees.behaviours.Running("Running")
root.add_child(failure)
root.add_child(running)
py_trees.display.print_ascii_tree(root)
visitor = py_trees.visitors.DebugVisitor()
py_trees.tests.tick_tree(root, visitor, 1, 1)
print("\n--------- Assertions ---------\n")
print("root.status == py_trees.common.Status.RUNNING")
assert(root.status == py_trees.common.Status.RUNNING)
print("failure.status == py_trees.common.Status.FAILURE")
assert(failure.status == py_trees.common.Status.FAILURE)
print("running.status == py_trees.common.Status.RUNNING")
assert(running.status == py_trees.common.Status.RUNNING)
py_trees.tests.tick_tree(root, visitor, 2, 2)
print("\n--------- Assertions ---------\n")
print("root.status == py_trees.common.Status.RUNNING")
assert(root.status == py_trees.common.Status.RUNNING)
print("failure.status == py_trees.common.Status.INVALID")
assert(failure.status == py_trees.common.Status.INVALID)
print("running.status == py_trees.common.Status.RUNNING")
assert(running.status == py_trees.common.Status.RUNNING)
def test_low_priority_success():
console.banner("Low Priority Success")
root = py_trees.composites.Chooser()
failure = py_trees.behaviours.Failure("Failure")
success = py_trees.behaviours.Success("Success")
root.add_child(failure)
root.add_child(success)
py_trees.display.print_ascii_tree(root)
visitor = py_trees.visitors.DebugVisitor()
py_trees.tests.tick_tree(root, visitor, 1, 1)
print("\n--------- Assertions ---------\n")
print("root.status == py_trees.common.Status.SUCCESS")
assert(root.status == py_trees.common.Status.SUCCESS)
print("failure.status == py_trees.common.Status.FAILURE")
assert(failure.status == py_trees.common.Status.FAILURE)
print("success.status == py_trees.common.Status.SUCCESS")
assert(success.status == py_trees.common.Status.SUCCESS)
py_trees.tests.tick_tree(root, visitor, 2, 2)
# make sure both children are ticked again (different to above)
print("\n--------- Assertions ---------\n")
print("root.status == py_trees.common.Status.SUCCESS")
assert(root.status == py_trees.common.Status.SUCCESS)
print("failure.status == py_trees.common.Status.FAILURE")
assert(failure.status == py_trees.common.Status.FAILURE)
print("success.status == py_trees.common.Status.SUCCESS")
assert(success.status == py_trees.common.Status.SUCCESS)
def test_higher_priority_ignore():
console.banner("Ignore Higher Priority")
root = py_trees.composites.Chooser()
ping_pong = py_trees.behaviours.SuccessEveryN("Ping Pong", 2)
running = py_trees.behaviours.Running("Running")
root.add_child(ping_pong)
root.add_child(running)
py_trees.display.print_ascii_tree(root)
visitor = py_trees.visitors.DebugVisitor()
py_trees.tests.tick_tree(root, visitor, 1, 1)
print("\n--------- Assertions ---------\n")
print("root.status == py_trees.common.Status.RUNNING")
assert(root.status == py_trees.common.Status.RUNNING)
print("ping_pong.status == py_trees.common.Status.FAILURE")
assert(ping_pong.status == py_trees.common.Status.FAILURE)
print("running.status == py_trees.common.Status.RUNNING")
assert(running.status == py_trees.common.Status.RUNNING)
py_trees.tests.tick_tree(root, visitor, 2, 2)
print("\n--------- Assertions ---------\n")
print("root.status == py_trees.common.Status.RUNNING")
assert(root.status == py_trees.common.Status.RUNNING)
print("ping_pong.status == py_trees.common.Status.INVALID")
    assert(ping_pong.status == py_trees.common.Status.INVALID)  # got invalidated and didn't get ticked
print("running.status == py_trees.common.Status.RUNNING")
assert(running.status == py_trees.common.Status.RUNNING)
| 41.713043 | 102 | 0.640609 | 574 | 4,797 | 5.186411 | 0.130662 | 0.148136 | 0.157205 | 0.229762 | 0.82432 | 0.814914 | 0.798119 | 0.767887 | 0.74303 | 0.698354 | 0 | 0.003082 | 0.1207 | 4,797 | 114 | 103 | 42.078947 | 0.702703 | 0.047321 | 0 | 0.746835 | 0 | 0 | 0.285191 | 0.131965 | 0 | 0 | 0 | 0 | 0.303797 | 1 | 0.037975 | false | 0 | 0.025316 | 0 | 0.063291 | 0.341772 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
0a4b24fe9c13f156637c43b6ee2badaa130fe33c | 6,664 | py | Python | conans/test/functional/old/runner_test.py | laundry-96/conan | fd938f7220ca042d94c42ec5eb607ee69c6785a3 | [
"MIT"
] | 1 | 2021-06-14T01:39:27.000Z | 2021-06-14T01:39:27.000Z | conans/test/functional/old/runner_test.py | laundry-96/conan | fd938f7220ca042d94c42ec5eb607ee69c6785a3 | [
"MIT"
] | 2 | 2018-02-22T21:28:04.000Z | 2018-09-28T13:51:47.000Z | conans/test/functional/old/runner_test.py | laundry-96/conan | fd938f7220ca042d94c42ec5eb607ee69c6785a3 | [
"MIT"
] | 1 | 2021-06-03T23:08:43.000Z | 2021-06-03T23:08:43.000Z | import os
import unittest
import six
from conans.client.runner import ConanRunner
from conans.test.utils.tools import TestClient
class RunnerTest(unittest.TestCase):
def _install_and_build(self, conanfile_text, runner=None):
client = TestClient(runner=runner)
files = {"conanfile.py": conanfile_text}
test_folder = os.path.join(client.current_folder, "test_folder")
self.assertFalse(os.path.exists(test_folder))
client.save(files)
client.run("install .")
client.run("build .")
return client
def ignore_error_test(self):
conanfile = """from conans import ConanFile
class Pkg(ConanFile):
def source(self):
ret = self.run("not_a_command", ignore_errors=True)
self.output.info("RETCODE %s" % (ret!=0))
"""
client = TestClient()
client.save({"conanfile.py": conanfile})
client.run("source .")
self.assertIn("RETCODE True", client.out)
def basic_test(self):
conanfile = '''
from conans import ConanFile
from conans.client.runner import ConanRunner
import platform
class ConanFileToolsTest(ConanFile):
def build(self):
self._runner = ConanRunner()
self.run("mkdir test_folder")
'''
client = self._install_and_build(conanfile)
test_folder = os.path.join(client.current_folder, "test_folder")
self.assertTrue(os.path.exists(test_folder))
def test_write_to_stringio(self):
runner = ConanRunner(print_commands_to_output=True,
generate_run_log_file=True,
log_run_to_output=True)
out = six.StringIO()
runner("python --version", output=out)
self.assertIn("""---Running------
> python --version
-----------------""", out.getvalue())
def log_test(self):
conanfile = '''
from conans import ConanFile
from conans.client.runner import ConanRunner
import platform
class ConanFileToolsTest(ConanFile):
def build(self):
self.run("cmake --version")
'''
# A runner logging everything
runner = ConanRunner(print_commands_to_output=True,
generate_run_log_file=True,
log_run_to_output=True)
client = self._install_and_build(conanfile, runner=runner)
self.assertIn("--Running---", client.user_io.out)
self.assertIn("> cmake --version", client.user_io.out)
self.assertIn("cmake version", client.user_io.out)
self.assertIn("Logging command output to file ", client.user_io.out)
# A runner logging everything
runner = ConanRunner(print_commands_to_output=True,
generate_run_log_file=False,
log_run_to_output=True)
client = self._install_and_build(conanfile, runner=runner)
self.assertIn("--Running---", client.user_io.out)
self.assertIn("> cmake --version", client.user_io.out)
self.assertIn("cmake version", client.user_io.out)
self.assertNotIn("Logging command output to file ", client.user_io.out)
runner = ConanRunner(print_commands_to_output=False,
generate_run_log_file=True,
log_run_to_output=True)
client = self._install_and_build(conanfile, runner=runner)
self.assertNotIn("--Running---", client.user_io.out)
self.assertNotIn("> cmake --version", client.user_io.out)
self.assertIn("cmake version", client.user_io.out)
self.assertIn("Logging command output to file ", client.user_io.out)
runner = ConanRunner(print_commands_to_output=False,
generate_run_log_file=False,
log_run_to_output=True)
client = self._install_and_build(conanfile, runner=runner)
self.assertNotIn("--Running---", client.user_io.out)
self.assertNotIn("> cmake --version", client.user_io.out)
self.assertIn("cmake version", client.user_io.out)
self.assertNotIn("Logging command output to file ", client.user_io.out)
runner = ConanRunner(print_commands_to_output=False,
generate_run_log_file=False,
log_run_to_output=False)
client = self._install_and_build(conanfile, runner=runner)
self.assertNotIn("--Running---", client.user_io.out)
self.assertNotIn("> cmake --version", client.user_io.out)
self.assertNotIn("cmake version", client.user_io.out)
self.assertNotIn("Logging command output to file ", client.user_io.out)
runner = ConanRunner(print_commands_to_output=False,
generate_run_log_file=True,
log_run_to_output=False)
client = self._install_and_build(conanfile, runner=runner)
self.assertNotIn("--Running---", client.user_io.out)
self.assertNotIn("> cmake --version", client.user_io.out)
self.assertNotIn("cmake version", client.user_io.out)
self.assertIn("Logging command output to file ", client.user_io.out)
def cwd_test(self):
conanfile = '''
from conans import ConanFile
from conans.client.runner import ConanRunner
import platform
class ConanFileToolsTest(ConanFile):
def build(self):
self._runner = ConanRunner()
self.run("mkdir test_folder", cwd="child_folder")
'''
files = {"conanfile.py": conanfile}
client = TestClient()
os.makedirs(os.path.join(client.current_folder, "child_folder"))
test_folder = os.path.join(client.current_folder, "child_folder", "test_folder")
self.assertFalse(os.path.exists(test_folder))
client.save(files)
client.run("install .")
client.run("build .")
self.assertTrue(os.path.exists(test_folder))
def cwd_error_test(self):
conanfile = '''
from conans import ConanFile
from conans.client.runner import ConanRunner
import platform
class ConanFileToolsTest(ConanFile):
def build(self):
self._runner = ConanRunner()
self.run("mkdir test_folder", cwd="non_existing_folder")
'''
files = {"conanfile.py": conanfile}
client = TestClient()
test_folder = os.path.join(client.current_folder, "child_folder", "test_folder")
self.assertFalse(os.path.exists(test_folder))
client.save(files)
client.run("install .")
client.run("build .", assert_error=True)
self.assertIn("Error while executing 'mkdir test_folder'", client.user_io.out)
self.assertFalse(os.path.exists(test_folder))
| 38.520231 | 88 | 0.643457 | 775 | 6,664 | 5.326452 | 0.108387 | 0.060562 | 0.072674 | 0.090843 | 0.843992 | 0.840359 | 0.822674 | 0.79094 | 0.760417 | 0.760417 | 0 | 0.000198 | 0.241297 | 6,664 | 172 | 89 | 38.744186 | 0.816258 | 0.008253 | 0 | 0.741259 | 0 | 0 | 0.2802 | 0.02967 | 0 | 0 | 0 | 0 | 0.237762 | 1 | 0.048951 | false | 0 | 0.125874 | 0 | 0.188811 | 0.048951 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6a93c5e136048912053a1448926927405b8bfb37 | 141 | py | Python | chainer_mask_rcnn/utils/evaluations/__init__.py | m3at/chainer-mask-rcnn | fa491663675cdc97974008becc99454d5e6e1d09 | [
"MIT"
] | 61 | 2018-04-04T07:09:32.000Z | 2021-11-12T19:54:23.000Z | chainer_mask_rcnn/utils/evaluations/__init__.py | Swall0w/chainer-mask-rcnn | 83366fc77e52aa6a29cfac4caa697d8b45dcffc6 | [
"MIT"
] | 15 | 2018-04-10T10:48:47.000Z | 2021-05-20T10:00:42.000Z | chainer_mask_rcnn/utils/evaluations/__init__.py | Swall0w/chainer-mask-rcnn | 83366fc77e52aa6a29cfac4caa697d8b45dcffc6 | [
"MIT"
] | 18 | 2018-07-06T10:13:56.000Z | 2022-03-02T12:25:31.000Z | # flake8: noqa
from .eval_instance_segmentation_voc import eval_instseg_voc
from .eval_instance_segmentation_coco import eval_instseg_coco
| 23.5 | 62 | 0.879433 | 20 | 141 | 5.7 | 0.5 | 0.140351 | 0.280702 | 0.491228 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007813 | 0.092199 | 141 | 5 | 63 | 28.2 | 0.882813 | 0.085106 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
6ab136b27e404b28e102d63563d0804faa8cc9c1 | 207 | py | Python | paste/controllers/hooks.py | Afonasev/Paste | ca1dcb566f15a9cf1aa0e97c6fc4cf4d450ec89d | [
"MIT"
] | null | null | null | paste/controllers/hooks.py | Afonasev/Paste | ca1dcb566f15a9cf1aa0e97c6fc4cf4d450ec89d | [
"MIT"
] | 1 | 2018-05-07T00:12:59.000Z | 2018-05-07T00:12:59.000Z | paste/controllers/hooks.py | Afonasev/Paste | ca1dcb566f15a9cf1aa0e97c6fc4cf4d450ec89d | [
"MIT"
] | null | null | null | from bottle import hook, request
@hook('before_request')
def strip_path():
"""
Ignore trailing slashes in routes
"""
request.environ['PATH_INFO'] = request.environ['PATH_INFO'].rstrip('/')
| 20.7 | 75 | 0.671498 | 25 | 207 | 5.4 | 0.68 | 0.207407 | 0.266667 | 0.325926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 207 | 9 | 76 | 23 | 0.789474 | 0.15942 | 0 | 0 | 0 | 0 | 0.208861 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6acbf5c2db30dd0f467f1cb82e953181db4884a2 | 7,675 | py | Python | tests/components/media_player/test_async_helpers.py | EmitKiwi/home-assistant | 0999e2ddc476f4bddf710005168b082f03a7cdc0 | [
"Apache-2.0"
] | 4 | 2019-05-14T20:33:43.000Z | 2021-09-25T14:56:08.000Z | tests/components/media_player/test_async_helpers.py | EmitKiwi/home-assistant | 0999e2ddc476f4bddf710005168b082f03a7cdc0 | [
"Apache-2.0"
] | 5 | 2022-03-01T06:31:03.000Z | 2022-03-31T07:20:45.000Z | tests/components/media_player/test_async_helpers.py | EmitKiwi/home-assistant | 0999e2ddc476f4bddf710005168b082f03a7cdc0 | [
"Apache-2.0"
] | 3 | 2018-08-27T10:08:30.000Z | 2020-07-04T10:07:03.000Z | """The tests for the Async Media player helper functions."""
import unittest
import asyncio
import homeassistant.components.media_player as mp
from homeassistant.const import (
STATE_PLAYING, STATE_PAUSED, STATE_ON, STATE_OFF, STATE_IDLE)
from homeassistant.util.async import run_coroutine_threadsafe
from tests.common import get_test_home_assistant
class AsyncMediaPlayer(mp.MediaPlayerDevice):
"""Async media player test class."""
def __init__(self, hass):
"""Initialize the test media player."""
self.hass = hass
self._volume = 0
self._state = STATE_OFF
@property
def state(self):
"""State of the player."""
return self._state
@property
def volume_level(self):
"""Volume level of the media player (0..1)."""
return self._volume
@asyncio.coroutine
def async_set_volume_level(self, volume):
"""Set volume level, range 0..1."""
self._volume = volume
@asyncio.coroutine
def async_media_play(self):
"""Send play command."""
self._state = STATE_PLAYING
@asyncio.coroutine
def async_media_pause(self):
"""Send pause command."""
self._state = STATE_PAUSED
@asyncio.coroutine
def async_turn_on(self):
"""Turn the media player on."""
self._state = STATE_ON
@asyncio.coroutine
def async_turn_off(self):
"""Turn the media player off."""
self._state = STATE_OFF
class SyncMediaPlayer(mp.MediaPlayerDevice):
"""Sync media player test class."""
def __init__(self, hass):
"""Initialize the test media player."""
self.hass = hass
self._volume = 0
self._state = STATE_OFF
@property
def state(self):
"""State of the player."""
return self._state
@property
def volume_level(self):
"""Volume level of the media player (0..1)."""
return self._volume
def set_volume_level(self, volume):
"""Set volume level, range 0..1."""
self._volume = volume
def volume_up(self):
"""Turn volume up for media player."""
if self.volume_level < 1:
self.set_volume_level(min(1, self.volume_level + .2))
def volume_down(self):
"""Turn volume down for media player."""
if self.volume_level > 0:
self.set_volume_level(max(0, self.volume_level - .2))
def media_play_pause(self):
"""Play or pause the media player."""
if self._state == STATE_PLAYING:
self._state = STATE_PAUSED
else:
self._state = STATE_PLAYING
def toggle(self):
"""Toggle the power on the media player."""
if self._state in [STATE_OFF, STATE_IDLE]:
self._state = STATE_ON
else:
self._state = STATE_OFF
@asyncio.coroutine
def async_media_play_pause(self):
"""Create a coroutine to wrap the future returned by ABC.
This allows the run_coroutine_threadsafe helper to be used.
"""
yield from super().async_media_play_pause()
@asyncio.coroutine
def async_toggle(self):
"""Create a coroutine to wrap the future returned by ABC.
This allows the run_coroutine_threadsafe helper to be used.
"""
yield from super().async_toggle()
class TestAsyncMediaPlayer(unittest.TestCase):
"""Test the media_player module."""
def setUp(self): # pylint: disable=invalid-name
"""Setup things to be run when tests are started."""
self.hass = get_test_home_assistant()
self.player = AsyncMediaPlayer(self.hass)
def tearDown(self):
"""Shut down test instance."""
self.hass.stop()
def test_volume_up(self):
"""Test the volume_up helper function."""
self.assertEqual(self.player.volume_level, 0)
run_coroutine_threadsafe(
self.player.async_set_volume_level(0.5), self.hass.loop).result()
self.assertEqual(self.player.volume_level, 0.5)
run_coroutine_threadsafe(
self.player.async_volume_up(), self.hass.loop).result()
self.assertEqual(self.player.volume_level, 0.6)
def test_volume_down(self):
"""Test the volume_down helper function."""
self.assertEqual(self.player.volume_level, 0)
run_coroutine_threadsafe(
self.player.async_set_volume_level(0.5), self.hass.loop).result()
self.assertEqual(self.player.volume_level, 0.5)
run_coroutine_threadsafe(
self.player.async_volume_down(), self.hass.loop).result()
self.assertEqual(self.player.volume_level, 0.4)
def test_media_play_pause(self):
"""Test the media_play_pause helper function."""
self.assertEqual(self.player.state, STATE_OFF)
run_coroutine_threadsafe(
self.player.async_media_play_pause(), self.hass.loop).result()
self.assertEqual(self.player.state, STATE_PLAYING)
run_coroutine_threadsafe(
self.player.async_media_play_pause(), self.hass.loop).result()
self.assertEqual(self.player.state, STATE_PAUSED)
def test_toggle(self):
"""Test the toggle helper function."""
self.assertEqual(self.player.state, STATE_OFF)
run_coroutine_threadsafe(
self.player.async_toggle(), self.hass.loop).result()
self.assertEqual(self.player.state, STATE_ON)
run_coroutine_threadsafe(
self.player.async_toggle(), self.hass.loop).result()
self.assertEqual(self.player.state, STATE_OFF)
class TestSyncMediaPlayer(unittest.TestCase):
"""Test the media_player module."""
def setUp(self): # pylint: disable=invalid-name
"""Setup things to be run when tests are started."""
self.hass = get_test_home_assistant()
self.player = SyncMediaPlayer(self.hass)
def tearDown(self):
"""Shut down test instance."""
self.hass.stop()
def test_volume_up(self):
"""Test the volume_up helper function."""
self.assertEqual(self.player.volume_level, 0)
self.player.set_volume_level(0.5)
self.assertEqual(self.player.volume_level, 0.5)
run_coroutine_threadsafe(
self.player.async_volume_up(), self.hass.loop).result()
self.assertEqual(self.player.volume_level, 0.7)
def test_volume_down(self):
"""Test the volume_down helper function."""
self.assertEqual(self.player.volume_level, 0)
self.player.set_volume_level(0.5)
self.assertEqual(self.player.volume_level, 0.5)
run_coroutine_threadsafe(
self.player.async_volume_down(), self.hass.loop).result()
self.assertEqual(self.player.volume_level, 0.3)
def test_media_play_pause(self):
"""Test the media_play_pause helper function."""
self.assertEqual(self.player.state, STATE_OFF)
run_coroutine_threadsafe(
self.player.async_media_play_pause(), self.hass.loop).result()
self.assertEqual(self.player.state, STATE_PLAYING)
run_coroutine_threadsafe(
self.player.async_media_play_pause(), self.hass.loop).result()
self.assertEqual(self.player.state, STATE_PAUSED)
def test_toggle(self):
"""Test the toggle helper function."""
self.assertEqual(self.player.state, STATE_OFF)
run_coroutine_threadsafe(
self.player.async_toggle(), self.hass.loop).result()
self.assertEqual(self.player.state, STATE_ON)
run_coroutine_threadsafe(
self.player.async_toggle(), self.hass.loop).result()
self.assertEqual(self.player.state, STATE_OFF)
| 34.263393 | 77 | 0.651987 | 965 | 7,675 | 4.968912 | 0.110881 | 0.087591 | 0.095099 | 0.12513 | 0.81001 | 0.769135 | 0.744943 | 0.732013 | 0.732013 | 0.732013 | 0 | 0.007519 | 0.237524 | 7,675 | 223 | 78 | 34.41704 | 0.811859 | 0.007427 | 0 | 0.739437 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169014 | 0 | null | null | 0 | 0.042254 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
0a9f77f08cadfb46371d7d2f4b0d36c62bed15a1 | 20,901 | py | Python | tests/test_cmdline.py | Carreau/pyflyby | c2f08342f30c520d0b02de463100e211f5edf220 | [
"BSD-3-Clause"
] | null | null | null | tests/test_cmdline.py | Carreau/pyflyby | c2f08342f30c520d0b02de463100e211f5edf220 | [
"BSD-3-Clause"
] | null | null | null | tests/test_cmdline.py | Carreau/pyflyby | c2f08342f30c520d0b02de463100e211f5edf220 | [
"BSD-3-Clause"
] | null | null | null | # pyflyby/test_cmdline.py
# License for THIS FILE ONLY: CC0 Public Domain Dedication
# http://creativecommons.org/publicdomain/zero/1.0/
from __future__ import (absolute_import, division, print_function,
with_statement)
from io import BytesIO
import os
import sys
import pexpect
import subprocess
import tempfile
from textwrap import dedent
from six import PY2, PY3
from pyflyby._util import EnvVarCtx
PYFLYBY_HOME = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
BIN_DIR = os.path.join(PYFLYBY_HOME, "bin")
python = sys.executable
def pipe(command, stdin=""):
return subprocess.Popen(
[python] + command,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT
).communicate(stdin.encode('utf-8'))[0].decode('utf-8').strip()
def test_tidy_imports_stdin_1():
result = pipe([BIN_DIR+"/tidy-imports"], stdin="os, sys")
expected = dedent('''
[PYFLYBY] /dev/stdin: added 'import os'
[PYFLYBY] /dev/stdin: added 'import sys'
[PYFLYBY] /dev/stdin: added mandatory 'from __future__ import absolute_import'
[PYFLYBY] /dev/stdin: added mandatory 'from __future__ import division'
from __future__ import absolute_import, division
import os
import sys
os, sys
''').strip()
assert result == expected
def test_tidy_imports_quiet_1():
result = pipe([BIN_DIR+"/tidy-imports", "--quiet"], stdin="os, sys")
expected = dedent('''
from __future__ import absolute_import, division
import os
import sys
os, sys
''').strip()
assert result == expected
def test_tidy_imports_log_level_1():
with EnvVarCtx(PYFLYBY_LOG_LEVEL="WARNING"):
result = pipe([BIN_DIR+"/tidy-imports"], stdin="os, sys")
expected = dedent('''
from __future__ import absolute_import, division
import os
import sys
os, sys
''').strip()
assert result == expected
def test_tidy_imports_filename_action_print_1():
with tempfile.NamedTemporaryFile(suffix=".py", mode='w+') as f:
f.write(dedent('''
# hello
def foo():
foo() + os + sys
''').lstrip())
f.flush()
result = pipe([BIN_DIR+"/tidy-imports", f.name])
expected = dedent('''
[PYFLYBY] {f.name}: added 'import os'
[PYFLYBY] {f.name}: added 'import sys'
[PYFLYBY] {f.name}: added mandatory 'from __future__ import absolute_import'
[PYFLYBY] {f.name}: added mandatory 'from __future__ import division'
# hello
from __future__ import absolute_import, division
import os
import sys
def foo():
foo() + os + sys
''').strip().format(f=f)
assert result == expected
def test_tidy_imports_filename_action_replace_1():
with tempfile.NamedTemporaryFile(suffix=".py", delete=False, mode='w+') as f:
f.write(dedent('''
"hello"
def foo():
foo() + os + sys
import a, b, c
a, c
''').lstrip())
name = f.name
cmd_output = pipe([BIN_DIR+"/tidy-imports", "-r", name])
expected_cmd_output = dedent('''
[PYFLYBY] {f.name}: removed unused 'import b'
[PYFLYBY] {f.name}: added 'import os'
[PYFLYBY] {f.name}: added 'import sys'
[PYFLYBY] {f.name}: added mandatory 'from __future__ import absolute_import'
[PYFLYBY] {f.name}: added mandatory 'from __future__ import division'
[PYFLYBY] {f.name}: *** modified ***
''').strip().format(f=f)
assert cmd_output == expected_cmd_output
with open(name) as f:
result = f.read()
expected_result = dedent('''
"hello"
from __future__ import absolute_import, division
import os
import sys
def foo():
foo() + os + sys
import a
import c
a, c
''').lstrip()
assert result == expected_result
os.unlink(name)
def test_tidy_imports_no_add_no_remove_1():
    input = dedent('''
        import a, b, c
        a, c, os, sys
    ''').lstrip()
    result = pipe([BIN_DIR+"/tidy-imports", "--no-add", "--no-remove"],
                  stdin=input)
    expected = dedent('''
        import a
        import b
        import c
        a, c, os, sys
    ''').strip()
    assert result == expected

def test_reformat_imports_1():
    input = dedent('''
        import zzt, megazeux
        from zzt import MEGAZEUX
        from ZZT import MEGAZEUX
        code()
        from megazeux import zzt
        from zzt import *
        import zzt as ZZT
        code() #x
        import zzt.zzt as zzt
        code()
        import zzt.foo as zzt
        code() #x
    ''').strip()
    result = pipe([BIN_DIR+"/reformat-imports"], stdin=input)
    expected = dedent('''
        from ZZT import MEGAZEUX
        import megazeux
        import zzt
        code()
        from megazeux import zzt
        import zzt as ZZT
        from zzt import *
        code() #x
        from zzt import zzt
        code()
        from zzt import foo as zzt
        code() #x
    ''').strip()
    assert result == expected
def test_collect_imports_1():
    with tempfile.NamedTemporaryFile(suffix=".py", mode='w+') as f:
        f.write(dedent('''
            "hello"
            from m1.m2 import f3, f4
            def f5(): pass
            def f6(): pass
            from m3.m4 import f6, f4
            import m1.m3
            f6, f7, m5, m7
            from m7 import *
        ''').lstrip())
        f.flush()
        result = pipe([BIN_DIR+"/collect-imports", f.name])
        expected = dedent('''
            from m1.m2 import f3, f4
            import m1.m3
            from m3.m4 import f4, f6
            from m7 import *
        ''').strip()
        assert result == expected

def test_collect_imports_include_1():
    with tempfile.NamedTemporaryFile(suffix=".py", mode='w+') as f:
        f.write(dedent('''
            from m1.m2 import f3, f4
            from m3.m4 import f6, f4
            from m3.m5 import f7, f8
            import m1.m3
            from m7 import *
            from m1 import f9
            from .m1 import f5
            from m1x import f6
            import m1, m1y
        ''').lstrip())
        f.flush()
        result = pipe([BIN_DIR+"/collect-imports", f.name,
                       "--include=m1",
                       "--include=m3.m5"])
        expected = dedent('''
            import m1
            from m1 import f9
            from m1.m2 import f3, f4
            import m1.m3
            from m3.m5 import f7, f8
        ''').strip()
        assert result == expected

def test_collect_imports_include_dot_1():
    with tempfile.NamedTemporaryFile(suffix=".py", mode='w+') as f:
        f.write(dedent('''
            from m1.m2 import f3, f4
            from m3.m4 import f6, f4
            import m1.m3
            from m7 import *
            from m1 import f9
            from .m1 import f5
            from m1x import f6
        ''').lstrip())
        f.flush()
        result = pipe([BIN_DIR+"/collect-imports", f.name, "--include=."])
        expected = dedent('''
            from .m1 import f5
        ''').strip()
        assert result == expected

def test_collect_exports_1():
    result = pipe([BIN_DIR+"/collect-exports", "fractions"])
    expected = dedent('''
        from fractions import Fraction, gcd
    ''').strip()
    assert result == expected

def test_find_import_1():
    result = pipe([BIN_DIR+"/find-import", "np"])
    expected = 'import numpy as np'
    assert result == expected

def test_find_import_bad_1():
    result = pipe([BIN_DIR+"/find-import", "omg_unknown_4223496"])
    expected = "[PYFLYBY] Can't find import for 'omg_unknown_4223496'"
    assert result == expected
def test_py_eval_1():
    result = pipe([BIN_DIR+"/py", "-c", "b64decode('aGVsbG8=')"])
    expected = dedent("""
        [PYFLYBY] from base64 import b64decode
        [PYFLYBY] b64decode('aGVsbG8=')
        b'hello'
    """).strip()
    if PY2:
        expected = expected.replace("b'hello'", "'hello'")
    assert result == expected

def test_py_exec_1():
    result = pipe([BIN_DIR+"/py", "-c", "if 1: print(b64decode('aGVsbG8='))"])
    expected = dedent("""
        [PYFLYBY] from base64 import b64decode
        [PYFLYBY] if 1: print(b64decode('aGVsbG8='))
        b'hello'
    """).strip()
    if PY2:
        expected = expected.replace("b'hello'", "hello")
    assert result == expected

def test_py_name_1():
    result = pipe([BIN_DIR+"/py", "-c", "__name__"])
    expected = dedent("""
        [PYFLYBY] __name__
        '__main__'
    """).strip()
    assert result == expected

def test_py_argv_1():
    result = pipe([BIN_DIR+"/py", "-c", "sys.argv", "x", "y"])
    expected = dedent("""
        [PYFLYBY] import sys
        [PYFLYBY] sys.argv
        ['-c', 'x', 'y']
    """).strip()
    assert result == expected

def test_py_argv_2():
    result = pipe([BIN_DIR+"/py", "-c", "sys.argv", "--debug", "-x x"])
    expected = dedent("""
        [PYFLYBY] import sys
        [PYFLYBY] sys.argv
        ['-c', '--debug', '-x x']
    """).strip()
    assert result == expected

def test_py_file_1():
    with tempfile.NamedTemporaryFile(suffix=".py", mode='w+') as f:
        f.write('print(sys.argv)\n')
        f.flush()
        result = pipe([BIN_DIR+"/py", f.name, "a", "b"])
        expected = dedent("""
            [PYFLYBY] import sys
            [%r, 'a', 'b']
        """).strip() % (f.name,)
        assert result == expected
def test_tidy_imports_query_no_change_1():
    input = dedent('''
        from __future__ import absolute_import, division
        import x1
        x1
    ''')
    with tempfile.NamedTemporaryFile(suffix=".py", mode='w+') as f:
        f.write(input)
        f.flush()
        child = pexpect.spawn(python, [BIN_DIR+'/tidy-imports', f.name], timeout=5.0)
        child.logfile = BytesIO()
        # We expect no "Replace [y/N]" query, since nothing changed.
        child.expect(pexpect.EOF)
        with open(f.name) as f2:
            output = f2.read()
        proc_output = child.logfile.getvalue()
        assert proc_output == b""
        assert output == input

def test_tidy_imports_query_y_1():
    input = dedent('''
        from __future__ import absolute_import, division
        import x1, x2
        x1
    ''')
    with tempfile.NamedTemporaryFile(suffix=".py", mode='w+') as f:
        f.write(input)
        f.flush()
        child = pexpect.spawn(python, [BIN_DIR+'/tidy-imports', f.name], timeout=5.0)
        child.logfile = BytesIO()
        child.expect_exact(" [y/N]")
        child.send("y\n")
        child.expect(pexpect.EOF)
        with open(f.name) as f2:
            output = f2.read()
        proc_output = child.logfile.getvalue()
        assert b"[y/N] y" in proc_output
        expected = dedent("""
            from __future__ import absolute_import, division
            import x1
            x1
        """)
        assert output == expected

def test_tidy_imports_query_n_1():
    input = dedent('''
        from __future__ import absolute_import, division
        import x1, x2
        x1
    ''')
    with tempfile.NamedTemporaryFile(suffix=".py", mode='w+') as f:
        f.write(input)
        f.flush()
        child = pexpect.spawn(python, [BIN_DIR+'/tidy-imports', f.name], timeout=5.0)
        child.logfile = BytesIO()
        child.expect_exact(" [y/N]")
        child.send("n\n")
        child.expect(pexpect.EOF)
        with open(f.name) as f2:
            output = f2.read()
        proc_output = child.logfile.getvalue()
        assert b"[y/N] n" in proc_output
        assert output == input

def test_tidy_imports_query_junk_1():
    input = dedent('''
        from __future__ import absolute_import, division
        import x1, x2
        x1
    ''')
    with tempfile.NamedTemporaryFile(suffix=".py", mode='w+') as f:
        f.write(input)
        f.flush()
        child = pexpect.spawn(python, [BIN_DIR+'/tidy-imports', f.name], timeout=5.0)
        child.logfile = BytesIO()
        child.expect_exact(" [y/N]")
        child.send("zxcv\n")
        child.expect(pexpect.EOF)
        with open(f.name) as f2:
            output = f2.read()
        proc_output = child.logfile.getvalue()
        assert b"[y/N] zxcv" in proc_output
        assert b"Aborted" in proc_output
        assert output == input
# Note, these tests will fail if the system does not have both python2 and
# python3 in the PATH

def test_tidy_imports_py2_fallback():
    input = dedent('''
        import x
        def f(*args, x=1):
            pass
    ''')
    with tempfile.NamedTemporaryFile(suffix=".py", mode='w+') as f:
        f.write(input)
        f.flush()
        child = pexpect.spawn(python, [BIN_DIR+'/tidy-imports', f.name], timeout=5.0)
        child.logfile = BytesIO()
        child.expect_exact(" [y/N]")
        child.send("n\n")
        child.expect(pexpect.EOF)
        with open(f.name) as f2:
            output = f2.read()
        proc_output = child.logfile.getvalue()
        assert b"removed unused 'import x'" in proc_output
        assert output == input
        if PY2:
            assert b"SyntaxError detected" in proc_output, proc_output
            assert b"falling back" in proc_output, proc_output
        else:
            assert b"SyntaxError detected" not in proc_output, proc_output
            assert b"falling back" not in proc_output, proc_output

def test_tidy_imports_py3_fallback():
    input = dedent('''
        import x
        print 1
    ''')
    with tempfile.NamedTemporaryFile(suffix=".py", mode='w+') as f:
        f.write(input)
        f.flush()
        child = pexpect.spawn(python, [BIN_DIR+'/tidy-imports', f.name], timeout=5.0)
        child.logfile = BytesIO()
        child.expect_exact(" [y/N]")
        child.send("n\n")
        child.expect(pexpect.EOF)
        with open(f.name) as f2:
            output = f2.read()
        proc_output = child.logfile.getvalue()
        assert b"removed unused 'import x'" in proc_output
        assert output == input
        if PY3:
            assert b"SyntaxError detected" in proc_output, proc_output
            assert b"falling back" in proc_output, proc_output
        else:
            assert b"SyntaxError detected" not in proc_output, proc_output
            assert b"falling back" not in proc_output, proc_output
def test_tidy_imports_symlinks_default():
    input = dedent('''
        import x
    ''')
    with tempfile.NamedTemporaryFile(suffix=".py", mode='w+') as f:
        f.write(input)
        f.flush()
        head, tail = os.path.split(f.name)
        symlink_name = os.path.join(head, 'symlink-' + tail)
        os.symlink(f.name, symlink_name)
        child = pexpect.spawn(python, [BIN_DIR+'/tidy-imports', symlink_name], timeout=5.0)
        child.logfile = BytesIO()
        # child.expect_exact(" [y/N]")
        # child.send("n\n")
        child.expect(pexpect.EOF)
        assert not os.path.islink(f.name)
        assert os.path.islink(symlink_name)
        with open(f.name) as f2:
            output = f2.read()
        with open(symlink_name) as f2:
            symlink_output = f2.read()
        proc_output = child.logfile.getvalue()
        assert b"Error: %s appears to be a symlink" % symlink_name.encode("utf-8") in proc_output
        assert output == input
        assert symlink_output == input

def test_tidy_imports_symlinks_error():
    input = dedent('''
        import x
    ''')
    with tempfile.NamedTemporaryFile(suffix=".py", mode='w+') as f:
        f.write(input)
        f.flush()
        head, tail = os.path.split(f.name)
        symlink_name = os.path.join(head, 'symlink-' + tail)
        os.symlink(f.name, symlink_name)
        child = pexpect.spawn(python, [BIN_DIR+'/tidy-imports', '--symlinks=error', symlink_name], timeout=5.0)
        child.logfile = BytesIO()
        # child.expect_exact(" [y/N]")
        # child.send("n\n")
        child.expect(pexpect.EOF)
        assert not os.path.islink(f.name)
        assert os.path.islink(symlink_name)
        with open(f.name) as f2:
            output = f2.read()
        with open(symlink_name) as f2:
            symlink_output = f2.read()
        proc_output = child.logfile.getvalue()
        assert b"Error: %s appears to be a symlink" % symlink_name.encode("utf-8") in proc_output
        assert output == input
        assert symlink_output == input
def test_tidy_imports_symlinks_follow():
    input = dedent('''
        import x
    ''')
    with tempfile.NamedTemporaryFile(suffix=".py", mode='w+') as f:
        f.write(input)
        f.flush()
        head, tail = os.path.split(f.name)
        symlink_name = os.path.join(head, 'symlink-' + tail)
        os.symlink(f.name, symlink_name)
        child = pexpect.spawn(python, [BIN_DIR+'/tidy-imports', '--symlinks=follow', symlink_name], timeout=5.0)
        child.logfile = BytesIO()
        child.expect_exact(" [y/N]")
        child.send("y\n")
        child.expect(pexpect.EOF)
        assert not os.path.islink(f.name)
        assert os.path.islink(symlink_name)
        with open(f.name) as f2:
            output = f2.read()
        with open(symlink_name) as f2:
            symlink_output = f2.read()
        proc_output = child.logfile.getvalue()
        assert b"Following symlink %s" % symlink_name.encode("utf-8") in proc_output
        assert 'import x' not in output
        assert 'import x' not in symlink_output

def test_tidy_imports_symlinks_skip():
    input = dedent('''
        import x
    ''')
    with tempfile.NamedTemporaryFile(suffix=".py", mode='w+') as f:
        f.write(input)
        f.flush()
        head, tail = os.path.split(f.name)
        symlink_name = os.path.join(head, 'symlink-' + tail)
        os.symlink(f.name, symlink_name)
        child = pexpect.spawn(python, [BIN_DIR+'/tidy-imports', '--symlinks=skip',
                                       symlink_name], timeout=5.0)
        child.logfile = BytesIO()
        # child.expect_exact(" [y/N]")
        # child.send("n\n")
        child.expect(pexpect.EOF)
        assert not os.path.islink(f.name)
        assert os.path.islink(symlink_name)
        with open(f.name) as f2:
            output = f2.read()
        with open(symlink_name) as f2:
            symlink_output = f2.read()
        proc_output = child.logfile.getvalue()
        assert b"Skipping symlink %s" % symlink_name.encode("utf-8") in proc_output
        assert output == input
        assert symlink_output == input
def test_tidy_imports_symlinks_replace():
    input = dedent('''
        import x
    ''')
    with tempfile.NamedTemporaryFile(suffix=".py", mode='w+') as f:
        f.write(input)
        f.flush()
        head, tail = os.path.split(f.name)
        symlink_name = os.path.join(head, 'symlink-' + tail)
        os.symlink(f.name, symlink_name)
        child = pexpect.spawn(python, [BIN_DIR+'/tidy-imports', '--symlink=replace', symlink_name], timeout=5.0)
        child.logfile = BytesIO()
        child.expect_exact(" [y/N]")
        child.send("y\n")
        child.expect(pexpect.EOF)
        assert not os.path.islink(f.name)
        assert not os.path.islink(symlink_name)
        with open(f.name) as f2:
            output = f2.read()
        with open(symlink_name) as f2:
            symlink_output = f2.read()
        proc_output = child.logfile.getvalue()
        assert b"Replacing symlink %s" % symlink_name.encode("utf-8") in proc_output
        assert output == input
        assert 'import x' not in symlink_output

def test_tidy_imports_symlinks_bad_argument():
    input = dedent('''
        import x
    ''')
    with tempfile.NamedTemporaryFile(suffix=".py", mode='w+') as f:
        f.write(input)
        f.flush()
        head, tail = os.path.split(f.name)
        symlink_name = os.path.join(head, 'symlink-' + tail)
        os.symlink(f.name, symlink_name)
        child = pexpect.spawn(python, [BIN_DIR+'/tidy-imports', '--symlinks=bad', symlink_name], timeout=5.0)
        child.logfile = BytesIO()
        # child.expect_exact(" [y/N]")
        # child.send("n\n")
        child.expect(pexpect.EOF)
        assert not os.path.islink(f.name)
        assert os.path.islink(symlink_name)
        with open(f.name) as f2:
            output = f2.read()
        with open(symlink_name) as f2:
            symlink_output = f2.read()
        proc_output = child.logfile.getvalue()
        assert b"error: --symlinks must be one of" in proc_output
        assert output == input
        assert symlink_output == input
0addbb69e2a520ec7a260fe93d1bd57b01af53b7 | 194 | py | Python | src/biker_portal/main/admin.py | KunalKavthekar/Biker-s-Portal | c306b7088956a8f2450c3daa5e953e6db7979a8d | [ "MIT" ] | null | null | null
from django.contrib import admin
from .models import ToDoList, Item, Search
# Register your models here.
# admin.site.register(ToDoList)
# admin.site.register(Item)
# admin.site.register(Search)
0ae6a5d903ca4b8617d1269d6f48940299e13f76 | 267 | py | Python | curso python/exercicios/mundo-1/ex014/quebrandoumnumero.py | lucasrenandns/Python-3 | 284b93a5538b3978d57593b5a664b1d2d98c6e1a | [ "MIT" ] | 1 | 2022-02-09T19:05:05.000Z | 2022-02-09T19:05:05.000Z
'''from math import trunc
n = float(input("Digite um valor: "))
print('Seu valor digitado foi {} e sua porção inteira é {}!'.format(n, trunc(n)))'''
n = float(input('Digite um valor: '))
print('Seu valor digitado foi {} e sua porção inteira é {}!'.format(n, int(n)))
7c3054ea2946dafa4b3356ee78930c7a5d38f4ec | 9,730 | py | Python | decora_wifi/models/location.py | 7ooL/api_decora | 6a92a2d20c47e5b10702778255a863643cca3665 | [ "MIT" ] | null | null | null
# Leviton Cloud Services API model Location.
# Auto-generated by api_scraper.py.
#
# Copyright 2017 Tim Lyakhovetskiy <tlyakhov@gmail.com>
#
# This code is released under the terms of the MIT license. See the LICENSE
# file for more details.
from ..base_model import BaseModel
class Location(BaseModel):

    def __init__(self, session, model_id=None):
        super(Location, self).__init__(session, model_id)

    @classmethod
    def count(cls, session, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/count"
        return session.call_api(api, attribs, 'get')

    def count_feed_items(self, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/feedItems/count".format(self._id)
        return self._session.call_api(api, attribs, 'get')

    def count_holidays(self, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/holidays/count".format(self._id)
        return self._session.call_api(api, attribs, 'get')

    def count_installations(self, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/installations/count".format(self._id)
        return self._session.call_api(api, attribs, 'get')

    @classmethod
    def create(cls, session, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations"
        return session.call_api(api, attribs, 'post')

    def create_holidays(self, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/holidays".format(self._id)
        return self._session.call_api(api, attribs, 'post')

    def create_installations(self, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/installations".format(self._id)
        return self._session.call_api(api, attribs, 'post')

    @classmethod
    def create_many(cls, session, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations"
        return session.call_api(api, attribs, 'post')

    def delete_by_id(self, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}".format(self._id)
        return self._session.call_api(api, attribs, 'delete')

    def delete_holidays(self, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/holidays".format(self._id)
        return self._session.call_api(api, attribs, 'delete')

    def delete_installations(self, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/installations".format(self._id)
        return self._session.call_api(api, attribs, 'delete')

    def destroy_by_id_holidays(self, holiday_id, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/holidays/{1}".format(self._id, holiday_id)
        return self._session.call_api(api, attribs, 'delete')

    def destroy_by_id_installations(self, installation_id, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/installations/{1}".format(self._id, installation_id)
        return self._session.call_api(api, attribs, 'delete')

    def exists(self, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/exists".format(self._id)
        return self._session.call_api(api, attribs, 'get')

    @classmethod
    def find(cls, session, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations"
        items = session.call_api(api, attribs, 'get')
        result = []
        if items is not None:
            for data in items:
                model = Location(session, data['id'])
                model.data = data
                result.append(model)
        return result

    def find_by_id(self, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}".format(self._id)
        data = self._session.call_api(api, attribs, 'get')
        self.data.update(data)
        return self

    def find_by_id_feed_items(self, feed_item_id, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/feedItems/{1}".format(self._id, feed_item_id)
        data = self._session.call_api(api, attribs, 'get')
        from .feed_item import FeedItem
        model = FeedItem(self._session, data['id'])
        model.data = data
        return model

    def find_by_id_holidays(self, holiday_id, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/holidays/{1}".format(self._id, holiday_id)
        return self._session.call_api(api, attribs, 'get')

    def find_by_id_installations(self, installation_id, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/installations/{1}".format(self._id, installation_id)
        data = self._session.call_api(api, attribs, 'get')
        from .installation import Installation
        model = Installation(self._session, data['id'])
        model.data = data
        return model

    @classmethod
    def find_one(cls, session, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/findOne"
        return session.call_api(api, attribs, 'get')

    def refresh(self):
        api = "/Locations/{0}".format(self._id)
        result = self._session.call_api(api, {}, 'get')
        if result is not None:
            self.data.update(result)
        return self

    def get_feed_items(self, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/feedItems".format(self._id)
        items = self._session.call_api(api, attribs, 'get')
        from .feed_item import FeedItem
        result = []
        if items is not None:
            for data in items:
                model = FeedItem(self._session, data['id'])
                model.data = data
                result.append(model)
        return result

    def get_holidays(self, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/holidays".format(self._id)
        return self._session.call_api(api, attribs, 'get')

    def get_installations(self, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/installations".format(self._id)
        items = self._session.call_api(api, attribs, 'get')
        from .installation import Installation
        result = []
        if items is not None:
            for data in items:
                model = Installation(self._session, data['id'])
                model.data = data
                result.append(model)
        return result

    def get_management_tier(self, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/managementTier".format(self._id)
        data = self._session.call_api(api, attribs, 'get')
        from .management_tier import ManagementTier
        model = ManagementTier(self._session, data['id'])
        model.data = data
        return model

    def get_organization(self, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/organization".format(self._id)
        data = self._session.call_api(api, attribs, 'get')
        from .organization import Organization
        model = Organization(self._session, data['id'])
        model.data = data
        return model

    def installers_near(self, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/installersNear".format(self._id)
        return self._session.call_api(api, attribs, 'get')

    def replace_by_id(self, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/replace".format(self._id)
        return self._session.call_api(api, attribs, 'post')

    @classmethod
    def replace_or_create(cls, session, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/replaceOrCreate"
        return session.call_api(api, attribs, 'post')

    def update_attributes(self, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}".format(self._id)
        data = self._session.call_api(api, attribs, 'put')
        self.data.update(attribs)
        return self

    def update_by_id_holidays(self, holiday_id, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/holidays/{1}".format(self._id, holiday_id)
        return self._session.call_api(api, attribs, 'put')

    def update_by_id_installations(self, installation_id, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/{0}/installations/{1}".format(self._id, installation_id)
        data = self._session.call_api(api, attribs, 'put')
        from .installation import Installation
        model = Installation(self._session, data['id'])
        model.data = data
        return model

    @classmethod
    def upsert(cls, session, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations"
        data = session.call_api(api, attribs, 'put')
        model = Location(session, data['id'])
        model.data = data
        return model

    @classmethod
    def upsert_with_where(cls, session, attribs=None):
        if attribs is None:
            attribs = {}
        api = "/Locations/upsertWithWhere"
        return session.call_api(api, attribs, 'post')
7c3d2094cbf84e6e9ad81951a76083a946c39b48 | 2,126 | py | Python | src/visitor.py | xt271828/Ayakashi-lang | 209333c511644a8410ceb496256fda32fb1799dc | [ "Apache-2.0" ] | 1 | 2018-08-27T15:37:47.000Z | 2018-08-27T15:37:47.000Z
src/visitor.py | shiinamiyuki/Ayakashi-lang | 209333c511644a8410ceb496256fda32fb1799dc | [ "Apache-2.0" ] | null | null | null
from abc import abstractmethod
class Visitor:
    def __init__(self):
        pass

    @abstractmethod
    def visit_cast_expr(self, node):
        pass

    @abstractmethod
    def visit_if_stmt(self, node):
        pass

    @abstractmethod
    def visit_while_stmt(self, node):
        pass

    @abstractmethod
    def visit_binary_expr(self, node):
        pass

    @abstractmethod
    def visit_number(self, node):
        pass

    @abstractmethod
    def visit_identifier(self, node):
        pass

    @abstractmethod
    def visit_unary_expr(self, node):
        pass

    @abstractmethod
    def visit_declaration(self, node):
        pass

    @abstractmethod
    def visit_block(self, node):
        pass

    @abstractmethod
    def visit_func_def(self, node):
        pass

    @abstractmethod
    def visit_func_def_arg(self, node):
        pass

    @abstractmethod
    def visit_index(self, node):
        pass

    @abstractmethod
    def visit_call(self, node):
        pass

    @abstractmethod
    def visit_call_arg(self, node):
        pass

    @abstractmethod
    def visit_string(self, node):
        pass

    @abstractmethod
    def visit_return(self, node):
        pass

    @abstractmethod
    def visit_struct(self, node):
        pass

    @abstractmethod
    def visit_ref_type(self, node):
        pass

    @abstractmethod
    def visit_ptr_type(self, node):
        pass

    @abstractmethod
    def visit_c_header(self, node):
        pass

    @abstractmethod
    def visit_c_definition(self, node):
        pass

    @abstractmethod
    def visit_c_type(self, node):
        pass

    @abstractmethod
    def visit_import(self, node):
        pass

    @abstractmethod
    def visit_implementation(self, node):
        pass

    @abstractmethod
    def visit_method_def(self, node):
        pass

    @abstractmethod
    def visit_interface(self, node):
        pass

    @abstractmethod
    def visit_impl_for(self, node):
        pass

    @abstractmethod
    def visit_generic(self, node):
        pass

    @abstractmethod
    def visit_type_inference(self, node):
        pass
7c7a8e891fe3b38fe18fa1e1e19733a188efa927 | 218 | py | Python | ummon/datasets/__init__.py | matherm/ummon3 | 08476d21ce17cc95180525d48202a1690dfc8a08 | [ "BSD-3-Clause" ] | 1 | 2022-02-10T06:47:13.000Z | 2022-02-10T06:47:13.000Z
# -*- coding: utf-8 -*-
from .memory_dataset import *
from .merge_dataset import *
from .siamese_dataset import *
from .triplet_dataset import *
from .unsupervised_dataset import *
from .pre_transform_dataset import *
7cba9def6a90bde58d8e8390f902dd622e76817f | 50,951 | py | Python | test/phonon/test_irreps.py | flokno/phonopy | 02e31d5998de0a9b664b67968bb511e21c400574 | [ "BSD-3-Clause" ] | 1 | 2021-04-16T07:51:16.000Z | 2021-04-16T07:51:16.000Z | 4 | 2018-09-18T08:12:43.000Z | 2020-10-20T23:37:28.000Z | 1 | 2021-09-17T08:21:30.000Z | 2021-09-17T08:21:30.000Z
import unittest
import os
try:
    from StringIO import StringIO
except ImportError:
    from io import StringIO
import numpy as np
from phonopy import Phonopy
from phonopy.interface.vasp import read_vasp
from phonopy.file_IO import parse_FORCE_SETS
data_dir = os.path.dirname(os.path.abspath(__file__))
chars_P2 = """ 1. 0. 1. 0.
1. 0. -1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. -1. 0.
1. 0. -1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. -1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. 1. 0."""
chars_Pc = """ 1. 0. -1. 0.
1. 0. 1. 0.
1. 0. 1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. -1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0.
1. 0. -1. 0.
1. 0. 1. 0.
1. 0. -1. 0."""
chars_P222_1 = """ 1. 0. 1. 0. -1. 0. -1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0."""
chars_Amm2 = """1. 0. -1. 0. -1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. -1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. -1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0."""
chars_P4_1 = """ 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. 0. 0. -2. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -0. 0. -2. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -0. 0. -2. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -0. 0. -2. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
2. 0. -0. 0. -2. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -0. 0. -2. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 0. 0. -2. 0. -0. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 0. 0. -2. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. 0. 0. -2. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -0. 0. -2. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 0. 0. -2. 0. 0. 0.
2. 0. -0. 0. -2. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
2. 0. -0. 0. -2. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
2. 0. -0. 0. -2. 0. -0. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -0. 0. -2. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. 0. 0. -2. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 0. 0. -2. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
2. 0. -0. 0. -2. 0. -0. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 0. 0. -2. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. 0. 0. -2. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 0. 0. -2. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
2. 0. -0. 0. -2. 0. -0. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -0. 0. -2. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. 0. 0. -2. 0. 0. 0."""
chars_Pbar4 = """ 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. -0. 0. -2. 0. -0. 0.
2. 0. -0. 0. -2. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -0. 0. -2. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
2. 0. -0. 0. -2. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. 0. 0. -2. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 0. 0. -2. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 0. 0. -2. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. 0. 0. -2. 0. -0. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. 0. 0. -2. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 0. 0. -2. 0. 0. 0.
2. 0. 0. 0. -2. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -0. 0. -2. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
2. 0. -0. 0. -2. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. 0. 0. -2. 0. 0. 0.
2. 0. -0. 0. -2. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -0. 0. -2. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 0. 0. -2. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. 0. 0. -2. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -0. 0. -2. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0."""
chars_I4_1a = """ 1. 0. 1. 0. 1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0.
2. 0. -0. 0. -2. 0. 0. 0. -2. 0. 0. 0. 2. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 0. 0. -2. 0. -0. 0. 2. 0. 0. 0. -2. 0. -0. 0.
1. 0. 1. 0. 1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0.
2. 0. 0. 0. -2. 0. -0. 0. -2. 0. -0. 0. 2. 0. 0. 0.
2. 0. 0. 0. -2. 0. -0. 0. 2. 0. 0. 0. -2. 0. -0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0.
2. 0. -0. 0. -2. 0. 0. 0. -2. 0. 0. 0. 2. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
2. 0. -0. 0. -2. 0. -0. 0. 2. 0. 0. 0. -2. 0. -0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. -0. 0. -2. 0. 0. 0. -2. 0. 0. 0. 2. 0. -0. 0.
2. 0. 0. 0. -2. 0. -0. 0. 2. 0. 0. 0. -2. 0. -0. 0.
1. 0. 1. 0. 1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
2. 0. 0. 0. -2. 0. 0. 0. -2. 0. 0. 0. 2. 0. -0. 0.
1. 0. 1. 0. 1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
2. 0. -0. 0. -2. 0. 0. 0. 2. 0. -0. 0. -2. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0."""
chars_P4mm = """ 2. 0. 0. 0. -2. 0. 0. 0. -0. 0. 0. 0. 0. 0. -0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. 0. 0. -2. 0. 0. 0. 0. 0. -0. 0. -0. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. 0. 0. -2. 0. 0. 0. 0. 0. -0. 0. -0. 0. 0. 0.
2. 0. 0. 0. -2. 0. 0. 0. -0. 0. -0. 0. 0. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. 0. 0. -2. 0. 0. 0. 0. 0. -0. 0. -0. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0."""
chars_Pbar42_1m = """ 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
2. 0. 0. 0. -2. 0. 0. 0. 0. 0. 0. 0. -0. 0. -0. 0.
2. 0. -0. 0. -2. 0. 0. 0. 0. 0. 0. 0. -0. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -0. 0. -2. 0. 0. 0. -0. 0. -0. 0. 0. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 0. 0. -2. 0. -0. 0. 0. 0. -0. 0. 0. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0.
2. 0. 0. 0. -2. 0. -0. 0. -0. 0. -0. 0. 0. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
2. 0. 0. 0. -2. 0. 0. 0. 0. 0. 0. 0. -0. 0. -0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. 0. 0. -2. 0. 0. 0. -0. 0. 0. 0. 0. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0."""
chars_P3m1 = """ 2. 0. -1. 0. -1. 0. -0. 0. 0. 0. -0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -1. 0. -1. 0. -0. 0. 0. 0. -0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 0. 0. -0. 0. 0. 0.
2. 0. -1. 0. -1. 0. -0. 0. 0. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0."""
chars_Pbar3m1 = """ 2. 0. -2. 0. -1. 0. 1. 0. -1. 0. 1. 0. -0. 0. 0. 0. 0. 0. -0. 0. 0. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
2. 0. 2. 0. -1. 0. -1. 0. -1. 0. -1. 0. -0. 0. -0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
2. 0. -2. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. -0. 0. 0. 0. -0. 0. -0. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -2. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. -0. 0. 0. 0. -0. 0. -0. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
2. 0. 2. 0. -1. 0. -1. 0. -1. 0. -1. 0. -0. 0. -0. 0. -0. 0. -0. 0. 0. 0. 0. 0."""
chars_P6 = """ 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0."""
chars_Pbar6 = """ 2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0."""
chars_P6_222 = """ 2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0. 0. 0. -0. 0. -0. 0. -0. 0. 0. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0. 0. 0. -0. 0. 0. 0. -0. 0. 0. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0. -0. 0. -0. 0. 0. 0. -0. 0. -0. 0. 0. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0. -0. 0. -0. 0. 0. 0. 0. 0. 0. 0. -0. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0. -0. 0. -0. 0. -0. 0. 0. 0. 0. 0. 0. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0. -0. 0. -0. 0. 0. 0. -0. 0. -0. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0. 0. 0. -0. 0. -0. 0. 0. 0. -0. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0. -0. 0. -0. 0. 0. 0. 0. 0. 0. 0. -0. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0. -0. 0. -0. 0. -0. 0. 0. 0. 0. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0. 0. 0. -0. 0. -0. 0. 0. 0. -0. 0. -0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0. 0. 0. -0. 0. -0. 0. 0. 0. -0. 0. -0. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0. -0. 0. 0. 0. 0. 0. 0. 0. -0. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0. 0. 0. -0. 0. 0. 0. -0. 0. 0. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0. 0. 0. 0. 0. -0. 0. 0. 0. 0. 0. -0. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0. -0. 0. 0. 0. 0. 0. -0. 0. 0. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0. 0. 0. 0. 0. -0. 0. -0. 0. -0. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0. -0. 0. -0. 0. 0. 0. 0. 0. 0. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0. -0. 0. 0. 0. -0. 0. -0. 0. 0. 0. -0. 0."""
chars_Pbar6m2 = """ 2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0. 0. 0. 0. 0. -0. 0. 0. 0. 0. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
2. 0. 1. 0. -1. 0. -2. 0. -1. 0. 1. 0. -0. 0. -0. 0. -0. 0. 0. 0. 0. 0. 0. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0. 0. 0. -0. 0. -0. 0. 0. 0. -0. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0. 0. 0. 0. 0. -0. 0. 0. 0. 0. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0. -0. 0. -0. 0. 0. 0. -0. 0. -0. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -1. 0. -1. 0. 2. 0. -1. 0. -1. 0. -0. 0. -0. 0. 0. 0. -0. 0. -0. 0. 0. 0."""
chars_P2_13 = """ 3. 0. -1. 0. -1. 0. -1. 0. -0. 0. -0. 0. -0. 0. 0. 0. -0. 0. 0. 0. -0. 0. -0. 0.
2. 0. 2. 0. 2. 0. 2. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -0. 0. 0. 0. -0. 0. 0. 0. -0. 0. 0. 0. 0. 0. -0. 0.
3. 0. -1. 0. -1. 0. -1. 0. 0. 0. 0. 0. 0. 0. -0. 0. 0. 0. -0. 0. 0. 0. 0. 0.
3. 0. -1. 0. -1. 0. -1. 0. -0. 0. -0. 0. 0. 0. 0. 0. -0. 0. 0. 0. -0. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
3. 0. -1. 0. -1. 0. -1. 0. 0. 0. -0. 0. -0. 0. 0. 0. 0. 0. 0. 0. -0. 0. -0. 0.
2. 0. 2. 0. 2. 0. 2. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0.
3. 0. -1. 0. -1. 0. -1. 0. 0. 0. -0. 0. -0. 0. 0. 0. 0. 0. 0. 0. -0. 0. -0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -0. 0. 0. 0. -0. 0. -0. 0. -0. 0. -0. 0. 0. 0. -0. 0.
2. 0. 2. 0. 2. 0. 2. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0.
3. 0. -1. 0. -1. 0. -1. 0. 0. 0. -0. 0. -0. 0. 0. 0. 0. 0. 0. 0. -0. 0. 0. 0.
3. 0. -1. 0. -1. 0. -1. 0. 0. 0. -0. 0. 0. 0. -0. 0. 0. 0. -0. 0. -0. 0. 0. 0."""
chars_Pabar3 = """ 3. 0. -3. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -0. 0. 0. 0. -0. 0. 0. 0. 0. 0. -0. 0. 0. 0. -0. 0. -0. 0. 0. 0. 0. 0. -0. 0. -0. 0. 0. 0. 0. 0. -0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
3. 0. -3. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. -0. 0. 0. 0. -0. 0. -0. 0. 0. 0. -0. 0. 0. 0. 0. 0. -0. 0. -0. 0. 0. 0. 0. 0. -0. 0. -0. 0. 0. 0.
2. 0. -2. 0. 2. 0. -2. 0. 2. 0. -2. 0. 2. 0. -2. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
3. 0. -3. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. -0. 0. -0. 0. 0. 0. -0. 0. 0. 0. 0. 0. -0. 0. 0. 0. -0. 0. 0. 0. -0. 0. -0. 0. 0. 0. -0. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. 2. 0. 2. 0. 2. 0. 2. 0. 2. 0. 2. 0. 2. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0.
3. 0. 3. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -0. 0. -0. 0. -0. 0. -0. 0. 0. 0. 0. 0. 0. 0. 0. 0. -0. 0. -0. 0. 0. 0. 0. 0. -0. 0. -0. 0. 0. 0. 0. 0.
3. 0. -3. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. -0. 0. 0. 0. -0. 0. -0. 0. 0. 0. -0. 0. 0. 0. 0. 0. -0. 0. -0. 0. 0. 0. 0. 0. -0. 0. -0. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
3. 0. -3. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. -0. 0. 0. 0. -0. 0. 0. 0. -0. 0. -0. 0. 0. 0. 0. 0. 0. 0. -0. 0. 0. 0. 0. 0. -0. 0. 0. 0. 0. 0.
2. 0. -2. 0. 2. 0. -2. 0. 2. 0. -2. 0. 2. 0. -2. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0.
3. 0. 3. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. 0. 0. 0. 0. 0. 0. 0. 0. -0. 0. -0. 0. -0. 0. -0. 0. 0. 0. 0. 0. -0. 0. -0. 0. 0. 0. 0. 0. -0. 0. -0. 0.
3. 0. -3. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. -0. 0. -0. 0. 0. 0. 0. 0. -0. 0. -0. 0. 0. 0. 0. 0. -0. 0. -0. 0. 0. 0. -0. 0. 0. 0. 0. 0. -0. 0.
3. 0. 3. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -0. 0. -0. 0. -0. 0. -0. 0. 0. 0. 0. 0. 0. 0. 0. 0. -0. 0. -0. 0. 0. 0. 0. 0. -0. 0. -0. 0. 0. 0. 0. 0."""
chars_P4_332 = """ 3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. -1. 0. 0. 0. 1. 0. 0. 0. 1. 0. 0. 0. -1. 0. -0. 0. -1. 0. -0. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
2. 0. -0. 0. 2. 0. -0. 0. 2. 0. -0. 0. 2. 0. -0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -0. 0. 1. 0. -0. 0. -1. 0. 0. 0. 1. 0. 0. 0. -1. 0. -0. 0. -1. 0. 0. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -0. 0. -1. 0. -0. 0. 1. 0. 0. 0. -1. 0. 0. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0. -0. 0. -1. 0. 0. 0. 1. 0.
2. 0. -0. 0. 2. 0. -0. 0. 2. 0. -0. 0. 2. 0. -0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. 1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0.
2. 0. 0. 0. 2. 0. 0. 0. 2. 0. 0. 0. 2. 0. 0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -0. 0. -1. 0. -0. 0. 1. 0. 0. 0. -1. 0. 0. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0. -0. 0. -1. 0. 0. 0. 1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. 0. 0. 1. 0. -0. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -0. 0. -1. 0. 0. 0. 1. 0. -0. 0. -1. 0. 0. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. -1. 0. -0. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. 1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. 0. 0. 1. 0. -0. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. -1. 0. -0. 0. -1. 0. 0. 0. 1. 0. -0. 0. 1. 0. -0. 0. -1. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. -1. 0. 0. 0. 1. 0. 0. 0. 1. 0. 0. 0. -1. 0. -0. 0. -1. 0. -0. 0. 1. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0. 0. 0. -1. 0. -0. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. -0. 0. -1. 0. 0. 0. 1. 0.
2. 0. -0. 0. 2. 0. -0. 0. 2. 0. -0. 0. 2. 0. -0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. 1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0.
2. 0. -0. 0. 2. 0. -0. 0. 2. 0. -0. 0. 2. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0. 0. 0. 1. 0. 0. 0. -1. 0. -0. 0. -1. 0. 0. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -0. 0. -1. 0. -0. 0. 1. 0. 0. 0. -1. 0. 0. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0. -0. 0. -1. 0. -0. 0. 1. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. 0. 0. -1. 0. 0. 0. 1. 0. 0. 0. -1. 0. -0. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. 0. 0. -1. 0. 0. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
2. 0. -0. 0. 2. 0. -0. 0. 2. 0. -0. 0. 2. 0. -0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. 0. 0. 1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. -1. 0. -0. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. 1. 0.
2. 0. 0. 0. 2. 0. 0. 0. 2. 0. -0. 0. 2. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0. 0. 0. -1. 0. -0. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. -0. 0. -1. 0. 0. 0. 1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. -0. 0. 1. 0. -0. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0.
2. 0. -0. 0. 2. 0. -0. 0. 2. 0. -0. 0. 2. 0. -0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0. 0. 0. -1. 0. 0. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. -0. 0. -1. 0. 0. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. 1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0.
2. 0. 0. 0. 2. 0. 0. 0. 2. 0. 0. 0. 2. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. 0. 0. 1. 0. -0. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0.
2. 0. 0. 0. 2. 0. 0. 0. 2. 0. 0. 0. 2. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. 0. 0. -1. 0. 0. 0. 1. 0. -0. 0. -1. 0. 0. 0. 1. 0. 0. 0. 1. 0. 0. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. -1. 0. -0. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. 1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0. 0. 0. 1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. 1. 0. 0. 0. 1. 0. 0. 0. -1. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. 0. 0. -1. 0. 0. 0. 1. 0. -0. 0. -1. 0. -0. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0. -0. 0. 1. 0. 0. 0. -1. 0. -0. 0. -1. 0. 0. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -0. 0. -1. 0. 0. 0. 1. 0. -0. 0. -1. 0. 0. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0.
2. 0. -0. 0. 2. 0. -0. 0. 2. 0. -0. 0. 2. 0. -0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. -0. 0. -1. 0. 0. 0. -1. 0. -0. 0. -1. 0. -0. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. -1. 0. 0. 0. 1. 0. 0. 0. 1. 0. 0. 0. -1. 0. -0. 0. -1. 0. -0. 0. 1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. -0. 0. 1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. 1. 0. -0. 0. 1. 0. -0. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -0. 0. -1. 0. 0. 0. 1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. 1. 0. -0. 0. -1. 0. 0. 0. -1. 0. 0. 0. 1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0. 0. 0. 1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. 0. 0. 1. 0. -0. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0.
2. 0. 0. 0. 2. 0. -0. 0. 2. 0. 0. 0. 2. 0. -0. 0. -1. 0. -0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. 0. 0. -1. 0. 0. 0. 1. 0. 0. 0. -1. 0. -0. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. 0. 0. -1. 0. 0. 0. 1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0. -0. 0. 1. 0. 0. 0. -1. 0. -0. 0. -1. 0. 0. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -0. 0. -1. 0. -0. 0. 1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. 1. 0. -0. 0. -1. 0. 0. 0. -1. 0. 0. 0. 1. 0.
2. 0. 0. 0. 2. 0. 0. 0. 2. 0. 0. 0. 2. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0. 0. 0. 1. 0. 0. 0. -1. 0. -0. 0. -1. 0. 0. 0. 1. 0. 0. 0. 1. 0. 0. 0. -1. 0.
2. 0. 0. 0. 2. 0. 0. 0. 2. 0. 0. 0. 2. 0. 0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. 0. 0. -1. 0. 0. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -0. 0. -1. 0. -0. 0. 1. 0. -0. 0. -1. 0. 0. 0. 1. 0. 0. 0. 1. 0. 0. 0. -1. 0. -0. 0. -1. 0. -0. 0. 1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. -0. 0. 1. 0. 0. 0. -1. 0. -0. 0. -1. 0. 0. 0. 1. 0. -0. 0. 1. 0. -0. 0. -1. 0.
1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0. -0. 0. 1. 0. 0. 0. -1. 0. -0. 0. -1. 0. 0. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. -0. 0. -1. 0. -0. 0. 1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. 1. 0. -0. 0. -1. 0. -0. 0. -1. 0. 0. 0. 1. 0."""
chars_Pbar43m = """ 3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. -0. 0. 1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. 1. 0. -0. 0. 1. 0. -0. 0. -1. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. 0. 0. -1. 0. 0. 0. 1. 0. 0. 0. -1. 0. -0. 0. 1. 0. -0. 0. 1. 0. -0. 0. -1. 0. 0. 0. -1. 0. 0. 0. 1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0. 0. 0. 1. 0. -0. 0. -1. 0. 0. 0. -1. 0. -0. 0. 1. 0. 0. 0. 1. 0. 0. 0. -1. 0.
2. 0. -0. 0. 2. 0. -0. 0. 2. 0. -0. 0. 2. 0. -0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. -0. 0. 1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. 1. 0. -0. 0. 1. 0. -0. 0. -1. 0.
3. 0. 1. 0. -1. 0. 1. 0. -1. 0. -1. 0. -1. 0. -1. 0. 0. 0. -1. 0. 0. 0. 1. 0. 0. 0. -1. 0. -0. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. 0. 0. -1. 0. 0. 0. 1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. 0. 0. 1. 0. -0. 0. -1. 0. -0. 0. 1. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. 1. 0. -0. 0. 1. 0. -0. 0. -1. 0.
1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0.
3. 0. -1. 0. -1. 0. -1. 0. -1. 0. 1. 0. -1. 0. 1. 0. -0. 0. 1. 0. 0. 0. -1. 0. 0. 0. 1. 0. -0. 0. -1. 0. -0. 0. -1. 0. -0. 0. 1. 0. 0. 0. 1. 0. 0. 0. -1. 0."""
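# The reference strings above hold character tables as whitespace-separated
# numbers, alternating real and imaginary parts, one table row per line.
# Below is a minimal sketch of how such a string can be parsed into a complex
# array; `parse_characters` is a hypothetical helper for illustration, not
# the `_load_data` method these tests actually call.
import numpy as np


def parse_characters(text):
    """Parse a character-table string into a complex 2D array.

    Each line contains pairs of floats: re, im, re, im, ...
    """
    rows = []
    for line in text.strip().splitlines():
        vals = np.array([float(x) for x in line.split()])
        # Even positions are real parts, odd positions imaginary parts.
        rows.append(vals[0::2] + 1j * vals[1::2])
    return np.array(rows)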
class TestIrreps(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
    def test_pt03_P2(self):
        data = self._load_data(chars_P2)
        phonon = self._get_phonon("P2",
                                  [3, 2, 2],
                                  np.eye(3))
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)

    def test_pt04_Pc(self):
        data = self._load_data(chars_Pc)
        phonon = self._get_phonon("Pc",
                                  [2, 2, 2],
                                  np.eye(3))
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)

    def test_pt06_P222_1(self):
        data = self._load_data(chars_P222_1)
        phonon = self._get_phonon("P222_1",
                                  [2, 2, 1],
                                  np.eye(3))
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)

    def test_pt07_Amm2(self):
        data = self._load_data(chars_Amm2)
        phonon = self._get_phonon("Amm2",
                                  [3, 2, 2],
                                  [[1, 0, 0],
                                   [0, 0.5, -0.5],
                                   [0, 0.5, 0.5]])
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)

    def test_pt09_P4_1(self):
        data = self._load_data(chars_P4_1)
        phonon = self._get_phonon("P4_1",
                                  [2, 2, 1],
                                  np.eye(3))
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)

    def test_pt10_Pbar4(self):
        data = self._load_data(chars_Pbar4)
        phonon = self._get_phonon("P-4",
                                  [1, 1, 2],
                                  np.eye(3))
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)

    def test_pt11_I4_1a(self):
        data = self._load_data(chars_I4_1a)
        phonon = self._get_phonon("I4_1a",
                                  [2, 2, 1],
                                  np.array([[-1, 1, 1],
                                            [1, -1, 1],
                                            [1, 1, -1]]) * 0.5)
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)
    def test_pt13_P4mm(self):
        data = self._load_data(chars_P4mm)
        phonon = self._get_phonon("P4mm",
                                  [3, 3, 2],
                                  np.eye(3))
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)

    def test_pt14_Pbar42_1m(self):
        data = self._load_data(chars_Pbar42_1m)
        phonon = self._get_phonon("P-42_1m",
                                  [2, 2, 3],
                                  np.eye(3))
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)

    def test_pt19_P3m1(self):
        data = self._load_data(chars_P3m1)
        phonon = self._get_phonon("P3m1",
                                  [4, 4, 2],
                                  np.eye(3))
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)

    def test_pt20_Pbar3m1(self):
        data = self._load_data(chars_Pbar3m1)
        phonon = self._get_phonon("P-3m1",
                                  [3, 3, 2],
                                  np.eye(3))
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)

    def test_pt21_P6(self):
        data = self._load_data(chars_P6)
        phonon = self._get_phonon("P6",
                                  [2, 2, 1],
                                  np.eye(3))
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)

    def test_pt22_Pbar6(self):
        data = self._load_data(chars_Pbar6)
        phonon = self._get_phonon("P-6",
                                  [1, 1, 3],
                                  np.eye(3))
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)

    def test_pt24_P6_222(self):
        data = self._load_data(chars_P6_222)
        phonon = self._get_phonon("P6_222",
                                  [2, 2, 2],
                                  np.eye(3))
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)

    def test_pt26_Pbar6m2(self):
        data = self._load_data(chars_Pbar6m2)
        phonon = self._get_phonon("P-6m2",
                                  [2, 2, 3],
                                  np.eye(3))
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)

    def test_pt28_P2_13(self):
        data = self._load_data(chars_P2_13)
        phonon = self._get_phonon("P2_13",
                                  [2, 2, 2],
                                  np.eye(3))
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)

    def test_pt29_Pabar3(self):
        data = self._load_data(chars_Pabar3)
        phonon = self._get_phonon("Pa-3",
                                  [2, 2, 2],
                                  np.eye(3))
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)

    def test_pt30_P4_332(self):
        data = self._load_data(chars_P4_332)
        phonon = self._get_phonon("P4_332",
                                  [1, 1, 1],
                                  np.eye(3))
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)

    def test_pt31_Pbar43m(self):
        data = self._load_data(chars_Pbar43m)
        phonon = self._get_phonon("P-43m",
                                  [2, 2, 2],
                                  np.eye(3))
        phonon.set_irreps([0, 0, 0])
        chars = phonon.get_irreps().get_characters()
        np.testing.assert_allclose(chars, data, atol=1e-5)
    def _get_phonon(self, spgtype, dim, pmat):
        cell = read_vasp(os.path.join(data_dir, "POSCAR_%s" % spgtype))
        phonon = Phonopy(cell,
                         np.diag(dim),
                         primitive_matrix=pmat)
        filename = os.path.join(data_dir, "FORCE_SETS_%s" % spgtype)
        force_sets = parse_FORCE_SETS(filename=filename)
        phonon.set_displacement_dataset(force_sets)
        phonon.produce_force_constants()
        print(phonon.get_symmetry().get_pointgroup())
        return phonon

    def _show_chars(self, chars):
        for line in chars:
            line_str = str(line.view(dtype='double').round(decimals=1))
            print(line_str.replace("[", '').replace("]", ''))

    def _load_data(self, data_str):
        data = np.loadtxt(StringIO(data_str))
        data = data.view(dtype="c%d" % (data.itemsize * 2))
        return data
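# The `_load_data` helper above relies on NumPy's zero-copy `view` trick: each
# adjacent (real, imaginary) float64 pair is reinterpreted as one complex128
# value. A minimal standalone sketch (the two-row sample text is made up for
# illustration, not taken from the character tables above):

```python
import numpy as np
from io import StringIO

# Hypothetical sample: two rows of four floats, i.e. two (re, im) pairs per row.
text = "1. 0. -0.5 0.5\n2. 0. 3. -1."
data = np.loadtxt(StringIO(text))                     # shape (2, 4), float64
# Reinterpret the buffer in place: "c16" is complex128 (two 8-byte floats).
cdata = data.view(dtype="c%d" % (data.itemsize * 2))  # shape (2, 2), complex128
print(cdata.dtype, cdata.shape)
```

# Because `view` only reinterprets the existing buffer, each row must contain
# an even number of floats, exactly as in the character tables above.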
if __name__ == '__main__':
    suite = unittest.TestLoader().loadTestsFromTestCase(TestIrreps)
    unittest.TextTestRunner(verbosity=2).run(suite)
# ---- next file: openstack_auth/tests/unit/test_auth.py
#      (repo: hemantsonawane95/horizon-apelby, license: Apache-2.0) ----
] | null | null | null | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from unittest import mock
import uuid

from django.conf import settings
from django.contrib import auth
from django import test
from django.test.utils import override_settings
from django.urls import reverse
from keystoneauth1 import exceptions as keystone_exceptions
from keystoneauth1.identity import v3 as v3_auth
from keystoneauth1 import session
from keystoneclient.v3 import client as client_v3
from keystoneclient.v3 import projects

from openstack_auth.plugin import password
from openstack_auth.tests import data_v3
from openstack_auth import utils

DEFAULT_DOMAIN = settings.OPENSTACK_KEYSTONE_DEFAULT_DOMAIN
# NOTE(e0ne): it's copy-pasted from horizon.test.helpers module until we
# figure out how to avoid this.
class IsA(object):
    """Comparator that equals any instance of the specified class."""

    def __init__(self, cls):
        self.cls = cls

    def __eq__(self, other):
        return isinstance(other, self.cls)
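# `IsA` works because `mock` compares recorded call arguments with `==`, so an
# `__eq__` defined as an `isinstance` check matches any instance of the wrapped
# class. A self-contained sketch (the `connect` call and its arguments are
# made-up examples; `IsA` is restated so the snippet runs on its own):

```python
from unittest import mock


class IsA(object):
    """Comparator that equals any instance of the specified class."""

    def __init__(self, cls):
        self.cls = cls

    def __eq__(self, other):
        return isinstance(other, self.cls)


m = mock.Mock()
m.connect(42, region="default")
# Passes for any int in the first position; only the keyword must match exactly.
m.connect.assert_called_once_with(IsA(int), region="default")
```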
class SwitchProviderTests(test.TestCase):

    interface = None

    def setUp(self):
        super().setUp()
        params = {
            'OPENSTACK_API_VERSIONS': {'identity': 3},
            'OPENSTACK_KEYSTONE_URL': "http://localhost/identity/v3",
        }
        if self.interface:
            params['OPENSTACK_ENDPOINT_TYPE'] = self.interface
        override = self.settings(**params)
        override.enable()
        self.addCleanup(override.disable)
        self.data = data_v3.generate_test_data()
        self.ks_client_module = client_v3

    def get_form_data(self, user):
        return {'region': "default",
                'domain': DEFAULT_DOMAIN,
                'password': user.password,
                'username': user.name}
    @mock.patch.object(v3_auth, 'Keystone2Keystone')
    @mock.patch.object(client_v3, 'Client')
    @mock.patch.object(v3_auth, 'Token')
    @mock.patch.object(v3_auth, 'Password')
    def test_switch_keystone_provider_remote_fail(
            self, mock_password, mock_token, mock_client, mock_k2k,
    ):
        target_provider = 'k2kserviceprovider'
        self.data = data_v3.generate_test_data(service_providers=True)
        self.sp_data = data_v3.generate_test_data(endpoint='http://sp2')
        projects = [self.data.project_one, self.data.project_two]
        user = self.data.user
        form_data = self.get_form_data(user)

        auth_password = mock.Mock(
            auth_url=settings.OPENSTACK_KEYSTONE_URL)
        auth_password.get_access.side_effect = [
            self.data.unscoped_access_info
        ]
        mock_password.return_value = auth_password

        auth_token_domain = mock.Mock(
            auth_url=settings.OPENSTACK_KEYSTONE_URL)
        auth_token_project1 = mock.Mock(
            auth_url=settings.OPENSTACK_KEYSTONE_URL)
        auth_token_unscoped = mock.Mock(
            auth_url=settings.OPENSTACK_KEYSTONE_URL)
        auth_token_project2 = mock.Mock(
            auth_url=settings.OPENSTACK_KEYSTONE_URL)
        mock_token.side_effect = [auth_token_domain,
                                  auth_token_project1,
                                  auth_token_unscoped,
                                  auth_token_project2]
        auth_token_domain.get_access.return_value = \
            self.data.domain_scoped_access_info
        auth_token_project1.get_access.return_value = \
            self.data.unscoped_access_info
        auth_token_unscoped.get_access.return_value = \
            self.data.unscoped_access_info
        auth_token_project2.get_access.return_value = \
            self.data.unscoped_access_info
        auth_token_project2.get_sp_auth_url.return_value = \
            'https://k2kserviceprovider/sp_url'

        client_domain = mock.Mock()
        client_project1 = mock.Mock()
        client_unscoped = mock.Mock()
        mock_client.side_effect = [client_domain,
                                   client_project1,
                                   client_unscoped]
        client_domain.projects.list.return_value = projects
        client_unscoped.projects.list.return_value = projects

        # let the K2K plugin fail when logging in
        auth_k2k = mock.Mock()
        auth_k2k.get_access.side_effect = \
            keystone_exceptions.AuthorizationFailure
        mock_k2k.return_value = auth_k2k

        # Log in
        url = reverse('login')
        response = self.client.get(url)
        self.assertEqual(response.status_code, 200)
        response = self.client.post(url, form_data)
        self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        # Switch
        url = reverse('switch_keystone_provider', args=[target_provider])
        form_data['keystone_provider'] = target_provider
        response = self.client.get(url, form_data, follow=True)
        self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        # Assert that provider has not changed because of failure
        self.assertEqual(self.client.session['keystone_provider_id'],
                         'localkeystone')

        # These should never change
        self.assertEqual(self.client.session['k2k_base_unscoped_token'],
                         self.data.unscoped_access_info.auth_token)
        self.assertEqual(self.client.session['k2k_auth_url'],
                         settings.OPENSTACK_KEYSTONE_URL)

        mock_password.assert_called_once_with(
            auth_url=settings.OPENSTACK_KEYSTONE_URL,
            password=self.data.user.password,
            username=self.data.user.name,
            user_domain_name=DEFAULT_DOMAIN,
            unscoped=True,
        )
        auth_password.get_access.assert_called_once_with(IsA(session.Session))

        mock_client.assert_has_calls([
            mock.call(
                session=IsA(session.Session),
                auth=auth_password,
            ),
            mock.call(
                session=IsA(session.Session),
                auth=auth_token_project1,
            ),
            mock.call(
                session=IsA(session.Session),
                auth=auth_token_unscoped,
            ),
        ])
        self.assertEqual(3, mock_client.call_count)
        client_domain.projects.list.assert_called_once_with(user=user.id)
        client_unscoped.projects.list.assert_called_once_with(user=user.id)

        mock_token.assert_has_calls([
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.unscoped_access_info.auth_token,
                domain_name=DEFAULT_DOMAIN,
                reauthenticate=False,
            ),
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.unscoped_access_info.auth_token,
                project_id=self.data.project_one.id,
                reauthenticate=False,
            ),
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.unscoped_access_info.auth_token,
                project_id=None,
                reauthenticate=False,
            ),
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.unscoped_access_info.auth_token,
                project_id=self.data.project_one.id,
                reauthenticate=False,
            ),
        ])
        self.assertEqual(4, mock_token.call_count)
        auth_token_domain.get_access.assert_called_once_with(
            IsA(session.Session))
        auth_token_project1.get_access.assert_called_once_with(
            IsA(session.Session))
        auth_token_unscoped.get_access.assert_called_once_with(
            IsA(session.Session))
        auth_token_project2.get_access.assert_called_once_with(
            IsA(session.Session))
        auth_token_project2.get_sp_auth_url.assert_called_once_with(
            IsA(session.Session), target_provider)

        mock_k2k.assert_called_once_with(
            base_plugin=auth_token_project2,
            service_provider=target_provider,
        )
        auth_k2k.get_access.assert_called_once_with(IsA(session.Session))
    @mock.patch.object(v3_auth, 'Keystone2Keystone')
    @mock.patch.object(client_v3, 'Client')
    @mock.patch.object(v3_auth, 'Token')
    @mock.patch.object(v3_auth, 'Password')
    def test_switch_keystone_provider_remote(
            self, mock_password, mock_token, mock_client, mock_k2k,
    ):
        keystone_url = settings.OPENSTACK_KEYSTONE_URL
        target_provider = 'k2kserviceprovider'
        self.data = data_v3.generate_test_data(service_providers=True)
        self.sp_data = data_v3.generate_test_data(endpoint='http://sp2')
        projects = [self.data.project_one, self.data.project_two]
        sp_projects = [self.sp_data.project_one, self.sp_data.project_two]
        domains = []
        user = self.data.user
        form_data = self.get_form_data(user)

        auth_password = mock.Mock(auth_url=keystone_url)
        mock_password.return_value = auth_password
        auth_password.get_access.return_value = self.data.unscoped_access_info

        auth_token_domain = mock.Mock()
        auth_token_scoped_1 = mock.Mock()
        auth_token_unscoped = mock.Mock(auth_url=keystone_url)
        auth_token_scoped_2 = mock.Mock()
        auth_token_sp_unscoped = mock.Mock(auth_url=keystone_url)
        auth_token_sp_scoped = mock.Mock()
        mock_token.side_effect = [
            auth_token_domain,
            auth_token_scoped_1,
            auth_token_unscoped,
            auth_token_scoped_2,
            auth_token_sp_unscoped,
            auth_token_sp_scoped,
        ]
        auth_token_domain.get_access.return_value = \
            self.data.domain_scoped_access_info
        auth_token_scoped_1.get_access.return_value = \
            self.data.unscoped_access_info
        auth_token_unscoped.get_access.return_value = \
            self.data.unscoped_access_info
        auth_token_scoped_2.get_access.return_value = \
            settings.OPENSTACK_KEYSTONE_URL
        auth_token_scoped_2.get_sp_auth_url.return_value = \
            'https://k2kserviceprovider/sp_url'
        auth_token_sp_unscoped.get_access.return_value = \
            self.sp_data.federated_unscoped_access_info
        auth_token_sp_scoped.get_access.return_value = \
            self.sp_data.federated_unscoped_access_info

        client_domain = mock.Mock()
        client_scoped = mock.Mock()
        client_unscoped = mock.Mock()
        client_sp_unscoped_1 = mock.Mock()
        client_sp_unscoped_2 = mock.Mock()
        client_sp_scoped = mock.Mock()
        mock_client.side_effect = [
            client_domain,
            client_scoped,
            client_unscoped,
            client_sp_unscoped_1,
            client_sp_unscoped_2,
            client_sp_scoped,
        ]
        client_domain.projects.list.return_value = projects
        client_unscoped.projects.list.return_value = projects
        client_sp_unscoped_1.auth.domains.return_value = domains
        client_sp_unscoped_2.federation.projects.list.return_value = \
            sp_projects

        auth_k2k = mock.Mock(
            auth_url='http://service_provider_endp/identity/v3')
        mock_k2k.return_value = auth_k2k
        auth_k2k.get_access.return_value = self.sp_data.unscoped_access_info

        # Log in
        url = reverse('login')
        response = self.client.get(url)
        self.assertEqual(response.status_code, 200)
        response = self.client.post(url, form_data)
        self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        # Switch
        url = reverse('switch_keystone_provider', args=[target_provider])
        form_data['keystone_provider'] = target_provider
        response = self.client.get(url, form_data, follow=True)
        self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        # Assert keystone provider has changed
        self.assertEqual(self.client.session['keystone_provider_id'],
                         target_provider)

        # These should not change
        self.assertEqual(self.client.session['k2k_base_unscoped_token'],
                         self.data.unscoped_access_info.auth_token)
        self.assertEqual(self.client.session['k2k_auth_url'],
                         settings.OPENSTACK_KEYSTONE_URL)

        mock_password.assert_called_once_with(
            auth_url=settings.OPENSTACK_KEYSTONE_URL,
            password=self.data.user.password,
            username=self.data.user.name,
            user_domain_name=DEFAULT_DOMAIN,
            unscoped=True,
        )
        auth_password.get_access.assert_called_once_with(IsA(session.Session))

        mock_client.assert_has_calls([
            mock.call(
                session=IsA(session.Session),
                auth=auth_password,
            ),
            mock.call(
                session=IsA(session.Session),
                auth=auth_token_scoped_1,
            ),
            mock.call(
                session=IsA(session.Session),
                auth=auth_token_unscoped,
            ),
            mock.call(
                session=IsA(session.Session),
                auth=auth_token_sp_unscoped,
            ),
            mock.call(
                session=IsA(session.Session),
                auth=auth_token_sp_unscoped,
            ),
            mock.call(
                session=IsA(session.Session),
                auth=auth_token_sp_scoped,
            ),
        ])
        self.assertEqual(6, mock_client.call_count)
        client_domain.projects.list.assert_called_once_with(user=user.id)
        client_unscoped.projects.list.assert_called_once_with(user=user.id)
        client_sp_unscoped_1.auth.domains.assert_called_once_with()
        client_sp_unscoped_2.federation.projects.list.assert_called_once_with()
        client_scoped.assert_not_called()
        client_sp_scoped.assert_not_called()

        mock_token.assert_has_calls([
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.unscoped_access_info.auth_token,
                domain_name=DEFAULT_DOMAIN,
                reauthenticate=False,
            ),
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.unscoped_access_info.auth_token,
                project_id=self.data.project_one.id,
                reauthenticate=False,
            ),
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.unscoped_access_info.auth_token,
                project_id=None,
                reauthenticate=False,
            ),
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.unscoped_access_info.auth_token,
                project_id=self.data.project_one.id,
                reauthenticate=False,
            ),
            mock.call(
                auth_url='http://service_provider_endp/identity/v3',
                token=self.sp_data.federated_unscoped_access_info.auth_token,
                project_id=None,
                reauthenticate=False,
            ),
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.sp_data.federated_unscoped_access_info.auth_token,
                project_id=self.sp_data.project_one.id,
                reauthenticate=False,
            ),
        ])
        self.assertEqual(6, mock_token.call_count)
        auth_token_domain.get_access.assert_called_once_with(
            IsA(session.Session))
        auth_token_scoped_1.get_access.assert_called_once_with(
            IsA(session.Session))
        auth_token_unscoped.get_access.assert_called_once_with(
            IsA(session.Session))
        auth_token_scoped_2.get_access.assert_called_once_with(
            IsA(session.Session))
        auth_token_scoped_2.get_sp_auth_url.assert_called_once_with(
            IsA(session.Session), target_provider)
        auth_token_sp_unscoped.get_access.assert_called_once_with(
            IsA(session.Session))
        auth_token_sp_scoped.get_access.assert_called_once_with(
            IsA(session.Session))

        mock_k2k.assert_called_once_with(
            base_plugin=auth_token_scoped_2,
            service_provider=target_provider,
        )
        auth_k2k.get_access.assert_called_once_with(IsA(session.Session))
    @mock.patch.object(client_v3, 'Client')
    @mock.patch.object(v3_auth, 'Token')
    @mock.patch.object(v3_auth, 'Password')
    def test_switch_keystone_provider_local(
            self, mock_password, mock_token, mock_client
    ):
        self.data = data_v3.generate_test_data(service_providers=True)
        keystone_url = settings.OPENSTACK_KEYSTONE_URL
        keystone_provider = 'localkeystone'
        projects = [self.data.project_one, self.data.project_two]
        domains = []
        user = self.data.user
        form_data = self.get_form_data(user)

        auth_password = mock.Mock(
            auth_url=settings.OPENSTACK_KEYSTONE_URL)
        mock_password.return_value = auth_password
        auth_password.get_access.return_value = self.data.unscoped_access_info

        auth_token_domain = mock.Mock(auth_url=keystone_url)
        auth_token_scoped_1 = mock.Mock(auth_url=keystone_url)
        auth_token_unscoped_1 = mock.Mock(auth_url=keystone_url)
        auth_token_scoped_2 = mock.Mock(auth_url=keystone_url)
        auth_token_unscoped_2 = mock.Mock(auth_url=keystone_url)
        mock_token.side_effect = [
            auth_token_domain,
            auth_token_scoped_1,
            auth_token_unscoped_1,
            auth_token_unscoped_2,
            auth_token_scoped_2,
        ]
        auth_token_domain.get_access.return_value = \
            self.data.domain_scoped_access_info
        for _auth in [auth_token_scoped_1, auth_token_unscoped_1,
                      auth_token_unscoped_2, auth_token_scoped_2]:
            _auth.get_access.return_value = self.data.unscoped_access_info

        client_domain = mock.Mock()
        client_scoped_1 = mock.Mock()
        client_unscoped_1 = mock.Mock()
        client_unscoped_2 = mock.Mock()
        client_scoped_2 = mock.Mock()
        mock_client.side_effect = [
            client_domain,
            client_scoped_1,
            client_unscoped_1,
            client_unscoped_2,
            client_scoped_2,
        ]
        client_domain.projects.list.return_value = projects
        client_unscoped_1.auth.domains.return_value = domains
        client_unscoped_2.projects.list.return_value = projects

        # Log in
        url = reverse('login')
        response = self.client.get(url)
        self.assertEqual(response.status_code, 200)
        response = self.client.post(url, form_data)
        self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        # Switch
        url = reverse('switch_keystone_provider', args=[keystone_provider])
        form_data['keystone_provider'] = keystone_provider
        response = self.client.get(url, form_data, follow=True)
        self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        # Assert nothing has changed since we are going from local to local
        self.assertEqual(self.client.session['keystone_provider_id'],
                         keystone_provider)
        self.assertEqual(self.client.session['k2k_base_unscoped_token'],
                         self.data.unscoped_access_info.auth_token)
        self.assertEqual(self.client.session['k2k_auth_url'],
                         settings.OPENSTACK_KEYSTONE_URL)

        mock_password.assert_called_once_with(
            auth_url=settings.OPENSTACK_KEYSTONE_URL,
            password=self.data.user.password,
            username=self.data.user.name,
            user_domain_name=DEFAULT_DOMAIN,
            unscoped=True,
        )
        auth_password.get_access.assert_called_once_with(IsA(session.Session))

        mock_client.assert_has_calls([
            mock.call(
                session=IsA(session.Session),
                auth=auth_password,
            ),
            mock.call(
                session=IsA(session.Session),
                auth=auth_token_scoped_1,
            ),
            mock.call(
                session=IsA(session.Session),
                auth=auth_token_unscoped_2,
            ),
            mock.call(
                session=IsA(session.Session),
                auth=auth_token_unscoped_2,
            ),
            mock.call(
                session=IsA(session.Session),
                auth=auth_token_scoped_2,
            )
        ])
        self.assertEqual(5, mock_client.call_count)
        client_domain.projects.list.assert_called_once_with(user=user.id)
        client_scoped_1.assert_not_called()
        client_unscoped_1.auth.domains.assert_called_once_with()
        client_unscoped_2.projects.list.assert_called_once_with(user=user.id)
        client_scoped_2.assert_not_called()

        mock_token.assert_has_calls([
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.unscoped_access_info.auth_token,
                domain_name=DEFAULT_DOMAIN,
                reauthenticate=False,
            ),
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.unscoped_access_info.auth_token,
                project_id=self.data.project_one.id,
                reauthenticate=False,
            ),
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.unscoped_access_info.auth_token,
                project_id=None,
                reauthenticate=False,
            ),
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.unscoped_access_info.auth_token,
                project_id=None,
                reauthenticate=False,
            ),
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.unscoped_access_info.auth_token,
                project_id=self.data.project_one.id,
                reauthenticate=False,
            ),
        ])
        self.assertEqual(5, mock_token.call_count)
        auth_token_domain.get_access.assert_called_once_with(
            IsA(session.Session))
        auth_token_scoped_1.get_access.assert_called_once_with(
            IsA(session.Session))
        auth_token_unscoped_1.get_access.assert_called_once_with(
            IsA(session.Session))
        auth_token_unscoped_2.get_access.assert_called_once_with(
            IsA(session.Session))
        auth_token_scoped_2.get_access.assert_called_once_with(
            IsA(session.Session))
    @mock.patch.object(client_v3, 'Client')
    @mock.patch.object(v3_auth, 'Token')
    @mock.patch.object(v3_auth, 'Password')
    def test_switch_keystone_provider_local_fail(
            self, mock_password, mock_token, mock_client
    ):
        self.data = data_v3.generate_test_data(service_providers=True)
        keystone_provider = 'localkeystone'
        user = self.data.user
        form_data = self.get_form_data(user)

        # mock authenticate
        auth_password = mock.Mock(
            auth_url=settings.OPENSTACK_KEYSTONE_URL)
        mock_password.return_value = auth_password
        auth_password.get_access.return_value = self.data.unscoped_access_info

        auth_token_domain = mock.Mock()
        auth_token_project = mock.Mock()
        auth_token_unscoped = mock.Mock()
        mock_token.side_effect = [
            auth_token_domain,
            auth_token_project,
            auth_token_unscoped,
        ]
        auth_token_domain.get_access.return_value = \
            self.data.domain_scoped_access_info
        auth_token_project.get_access.return_value = \
            self.data.unscoped_access_info
        auth_token_unscoped.get_access.side_effect = \
            keystone_exceptions.AuthorizationFailure

        client_domain = mock.Mock()
        client_project = mock.Mock()
        mock_client.side_effect = [
            client_domain,
            client_project,
        ]
        client_domain.projects.list.return_value = [
            self.data.project_one, self.data.project_two
        ]

        # Log in
        url = reverse('login')
        response = self.client.get(url)
        self.assertEqual(response.status_code, 200)
        response = self.client.post(url, form_data)
        self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        # Switch
        url = reverse('switch_keystone_provider', args=[keystone_provider])
        form_data['keystone_provider'] = keystone_provider
        response = self.client.get(url, form_data, follow=True)
        self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        # Assert
        self.assertEqual(self.client.session['keystone_provider_id'],
                         keystone_provider)
        self.assertEqual(self.client.session['k2k_base_unscoped_token'],
                         self.data.unscoped_access_info.auth_token)
        self.assertEqual(self.client.session['k2k_auth_url'],
                         settings.OPENSTACK_KEYSTONE_URL)

        mock_password.assert_called_once_with(
            auth_url=settings.OPENSTACK_KEYSTONE_URL,
            password=self.data.user.password,
            username=self.data.user.name,
            user_domain_name=DEFAULT_DOMAIN,
            unscoped=True,
        )
        auth_password.get_access.assert_called_once_with(IsA(session.Session))

        mock_client.assert_has_calls([
            mock.call(
                auth=auth_password,
                session=IsA(session.Session),
            ),
            mock.call(
                auth=auth_token_project,
                session=IsA(session.Session),
            ),
        ])
        self.assertEqual(2, mock_client.call_count)
        client_domain.projects.list.assert_called_once_with(user=user.id)
        client_project.assert_not_called()

        mock_token.assert_has_calls([
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.unscoped_access_info.auth_token,
                domain_name=DEFAULT_DOMAIN,
                reauthenticate=False,
            ),
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.unscoped_access_info.auth_token,
                project_id=self.data.project_one.id,
                reauthenticate=False,
            ),
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.unscoped_access_info.auth_token,
                project_id=None,
                reauthenticate=False,
            ),
        ])
        self.assertEqual(3, mock_token.call_count)
        auth_token_domain.get_access.assert_called_once_with(
            IsA(session.Session))
        auth_token_project.get_access.assert_called_once_with(
            IsA(session.Session))
        auth_token_unscoped.get_access.assert_called_once_with(
            IsA(session.Session))
class SwitchProviderTestsPublicURL(SwitchProviderTests):
    interface = 'publicURL'


class SwitchProviderTestsInternalURL(SwitchProviderTests):
    interface = 'internalURL'


class SwitchProviderTestsAdminURL(SwitchProviderTests):
    interface = 'adminURL'
class OpenStackAuthTestsWebSSO(test.TestCase):

    def setUp(self):
        super().setUp()
        self.data = data_v3.generate_test_data()
        self.ks_client_module = client_v3
        self.idp_id = uuid.uuid4().hex
        self.idp_oidc_id = uuid.uuid4().hex
        self.idp_saml2_id = uuid.uuid4().hex

        settings.OPENSTACK_API_VERSIONS['identity'] = 3
        settings.OPENSTACK_KEYSTONE_URL = 'http://localhost/identity/v3'
        settings.WEBSSO_ENABLED = True
        settings.WEBSSO_CHOICES = (
            ('credentials', 'Keystone Credentials'),
            ('oidc', 'OpenID Connect'),
            ('saml2', 'Security Assertion Markup Language'),
            (self.idp_oidc_id, 'IDP OIDC'),
            (self.idp_saml2_id, 'IDP SAML2')
        )
        settings.WEBSSO_IDP_MAPPING = {
            self.idp_oidc_id: (self.idp_id, 'oidc'),
            self.idp_saml2_id: (self.idp_id, 'saml2')
        }
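# The WEBSSO_IDP_MAPPING set up above routes a login form's auth_type either to
# an IdP-specific websso URL or to a protocol-only one. This hypothetical helper
# (websso_url is not part of openstack_auth, and the origin is left unencoded,
# matching the assertions in the redirect tests below) sketches that routing:

```python
# Hypothetical mapping: form choice -> (identity provider id, protocol).
WEBSSO_IDP_MAPPING = {'myidp_oidc': ('myidp', 'oidc')}


def websso_url(keystone_url, auth_type, origin):
    if auth_type in WEBSSO_IDP_MAPPING:
        # IdP-specific route: both the IdP and the protocol appear in the URL.
        idp, protocol = WEBSSO_IDP_MAPPING[auth_type]
        return ('%s/auth/OS-FEDERATION/identity_providers/%s'
                '/protocols/%s/websso?origin=%s'
                % (keystone_url, idp, protocol, origin))
    # Fall back to protocol-only routing.
    return ('%s/auth/OS-FEDERATION/websso/%s?origin=%s'
            % (keystone_url, auth_type, origin))
```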
    def test_login_form(self):
        url = reverse('login')
        response = self.client.get(url)
        self.assertEqual(response.status_code, 200)
        self.assertContains(response, 'credentials')
        self.assertContains(response, 'oidc')
        self.assertContains(response, 'saml2')
        self.assertContains(response, self.idp_oidc_id)
        self.assertContains(response, self.idp_saml2_id)
    def test_websso_redirect_by_protocol(self):
        origin = 'http://testserver/auth/websso/'
        protocol = 'oidc'
        redirect_url = ('%s/auth/OS-FEDERATION/websso/%s?origin=%s' %
                        (settings.OPENSTACK_KEYSTONE_URL, protocol, origin))
        form_data = {'auth_type': protocol,
                     'region': 'default'}
        url = reverse('login')

        # POST to the page and redirect to keystone.
        response = self.client.post(url, form_data)
        self.assertRedirects(response, redirect_url, status_code=302,
                             target_status_code=404)

    def test_websso_redirect_by_idp(self):
        origin = 'http://testserver/auth/websso/'
        protocol = 'oidc'
        redirect_url = ('%s/auth/OS-FEDERATION/identity_providers/%s'
                        '/protocols/%s/websso?origin=%s' %
                        (settings.OPENSTACK_KEYSTONE_URL, self.idp_id,
                         protocol, origin))
        form_data = {'auth_type': self.idp_oidc_id,
                     'region': 'default'}
        url = reverse('login')

        # POST to the page and redirect to keystone.
        response = self.client.post(url, form_data)
        self.assertRedirects(response, redirect_url, status_code=302,
                             target_status_code=404)

    @override_settings(
        WEBSSO_KEYSTONE_URL='http://keystone-public/identity/v3')
    def test_websso_redirect_using_websso_keystone_url(self):
        origin = 'http://testserver/auth/websso/'
        protocol = 'oidc'
        redirect_url = ('%s/auth/OS-FEDERATION/identity_providers/%s'
                        '/protocols/%s/websso?origin=%s' %
                        (settings.WEBSSO_KEYSTONE_URL, self.idp_id,
                         protocol, origin))
        form_data = {'auth_type': self.idp_oidc_id,
                     'region': 'default'}
        url = reverse('login')

        # POST to the page and redirect to keystone.
        response = self.client.post(url, form_data)
        # verify that the request was sent back to WEBSSO_KEYSTONE_URL
        self.assertRedirects(response, redirect_url, status_code=302,
                             target_status_code=404)
    @mock.patch.object(client_v3, 'Client')
    @mock.patch.object(v3_auth, 'Token')
    def test_websso_login(self, mock_token, mock_client):
        keystone_url = settings.OPENSTACK_KEYSTONE_URL
        form_data = {
            'token': self.data.federated_unscoped_access_info.auth_token,
        }

        auth_token_unscoped = mock.Mock(auth_url=keystone_url)
        auth_token_scoped = mock.Mock(auth_url=keystone_url)
        mock_token.side_effect = [
            auth_token_unscoped,
            auth_token_scoped,
        ]
        auth_token_unscoped.get_access.return_value = \
            self.data.federated_unscoped_access_info
        auth_token_scoped.get_access.return_value = \
            self.data.unscoped_access_info

        client_unscoped_1 = mock.Mock()
        client_unscoped_2 = mock.Mock()
        client_scoped = mock.Mock()
        mock_client.side_effect = [
            client_unscoped_1,
            client_unscoped_2,
            client_scoped,
        ]
        client_unscoped_1.auth.domains.return_value = []
        client_unscoped_2.federation.projects.list.return_value = [
            self.data.project_one, self.data.project_two
        ]

        url = reverse('websso')

        # POST to the page to log in.
        response = self.client.post(url, form_data)
        self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        mock_token.assert_has_calls([
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.federated_unscoped_access_info.auth_token,
                project_id=None,
                reauthenticate=False,
            ),
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.unscoped_access_info.auth_token,
                project_id=self.data.project_one.id,
                reauthenticate=False,
            ),
        ])
        self.assertEqual(2, mock_token.call_count)
        auth_token_unscoped.get_access.assert_called_once_with(
            IsA(session.Session))
        auth_token_scoped.get_access.assert_called_once_with(
            IsA(session.Session))

        mock_client.assert_has_calls([
            mock.call(
                auth=auth_token_unscoped,
                session=IsA(session.Session),
            ),
            mock.call(
                auth=auth_token_unscoped,
                session=IsA(session.Session),
            ),
            mock.call(
                auth=auth_token_scoped,
                session=IsA(session.Session),
            ),
        ])
        self.assertEqual(3, mock_client.call_count)
        client_unscoped_1.auth.domains.assert_called_once_with()
        client_unscoped_2.federation.projects.list.assert_called_once_with()
        client_scoped.assert_not_called()

    @mock.patch.object(client_v3, 'Client')
    @mock.patch.object(v3_auth, 'Token')
    @override_settings(
        OPENSTACK_KEYSTONE_URL='http://auth.openstack.org/identity/v3')
    def test_websso_login_with_auth_in_url(self, mock_token, mock_client):
        keystone_url = settings.OPENSTACK_KEYSTONE_URL

        form_data = {
            'token': self.data.federated_unscoped_access_info.auth_token,
        }

        auth_token_unscoped = mock.Mock(auth_url=keystone_url)
        auth_token_scoped = mock.Mock(auth_url=keystone_url)
        mock_token.side_effect = [
            auth_token_unscoped,
            auth_token_scoped,
        ]
        auth_token_unscoped.get_access.return_value = \
            self.data.federated_unscoped_access_info
        auth_token_scoped.get_access.return_value = \
            self.data.unscoped_access_info

        client_unscoped_1 = mock.Mock()
        client_unscoped_2 = mock.Mock()
        client_scoped = mock.Mock()
        mock_client.side_effect = [
            client_unscoped_1,
            client_unscoped_2,
            client_scoped,
        ]
        client_unscoped_1.auth.domains.return_value = []
        client_unscoped_2.federation.projects.list.return_value = [
            self.data.project_one, self.data.project_two
        ]

        url = reverse('websso')

        # POST to the page to log in.
        response = self.client.post(url, form_data)
        self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        # validate token flow
        mock_token.assert_has_calls([
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.federated_unscoped_access_info.auth_token,
                project_id=None,
                reauthenticate=False,
            ),
            mock.call(
                auth_url=settings.OPENSTACK_KEYSTONE_URL,
                token=self.data.federated_unscoped_access_info.auth_token,
                project_id=self.data.project_one.id,
                reauthenticate=False,
            ),
        ])
        self.assertEqual(2, mock_token.call_count)
        auth_token_unscoped.get_access.assert_called_once_with(
            IsA(session.Session))
        auth_token_scoped.get_access.assert_called_once_with(
            IsA(session.Session))
        mock_client.assert_has_calls([
            mock.call(
                session=IsA(session.Session),
                auth=auth_token_unscoped,
            ),
            mock.call(
                session=IsA(session.Session),
                auth=auth_token_unscoped,
            ),
            mock.call(
                session=IsA(session.Session),
                auth=auth_token_scoped,
            ),
        ])
        self.assertEqual(3, mock_client.call_count)
        client_unscoped_1.auth.domains.assert_called_once_with()
        client_unscoped_2.federation.projects.list.assert_called_once_with()
        client_scoped.assert_not_called()

    @override_settings(WEBSSO_DEFAULT_REDIRECT=True)
    @override_settings(WEBSSO_DEFAULT_REDIRECT_PROTOCOL='oidc')
    @override_settings(
        WEBSSO_DEFAULT_REDIRECT_REGION=settings.OPENSTACK_KEYSTONE_URL)
    def test_websso_login_default_redirect(self):
        origin = 'http://testserver/auth/websso/'
        protocol = 'oidc'
        redirect_url = ('%s/auth/OS-FEDERATION/websso/%s?origin=%s' %
                        (settings.OPENSTACK_KEYSTONE_URL, protocol, origin))

        url = reverse('login')

        # GET the page and redirect to keystone.
        response = self.client.get(url)
        self.assertRedirects(response, redirect_url, status_code=302,
                             target_status_code=404)

    @override_settings(WEBSSO_DEFAULT_REDIRECT=True)
    @override_settings(WEBSSO_DEFAULT_REDIRECT_LOGOUT='http://idptest/logout')
    def test_websso_logout_default_redirect(self):
        settings.WEBSSO_DEFAULT_REDIRECT = True
        settings.WEBSSO_DEFAULT_REDIRECT_LOGOUT = 'http://idptest/logout'

        url = reverse('logout')

        # GET the page and redirect to the logout method of the IdP.
        response = self.client.get(url)
        self.assertRedirects(response, settings.WEBSSO_DEFAULT_REDIRECT_LOGOUT,
                             status_code=302, target_status_code=301)


class OpenStackAuthTests(test.TestCase):

    interface = None

    def setUp(self):
        super().setUp()

        params = {
            'OPENSTACK_API_VERSIONS': {'identity': 3},
            'OPENSTACK_KEYSTONE_URL': "http://localhost/identity/v3",
        }

        if self.interface:
            params['OPENSTACK_ENDPOINT_TYPE'] = self.interface

        override = self.settings(**params)
        override.enable()
        self.addCleanup(override.disable)

        self.data = data_v3.generate_test_data()

    def get_form_data(self, user):
        return {'region': "default",
                'domain': DEFAULT_DOMAIN,
                'password': user.password,
                'username': user.name}

    @mock.patch('keystoneauth1.identity.v3.Token.get_access')
    @mock.patch('keystoneauth1.identity.v3.Password.get_access')
    @mock.patch('keystoneclient.v3.client.Client')
    def test_login(self, mock_client, mock_get_access, mock_get_access_token):
        projects = [self.data.project_one, self.data.project_two]
        user = self.data.user
        form_data = self.get_form_data(user)
        url = reverse('login')

        mock_get_access.return_value = self.data.unscoped_access_info
        mock_client.return_value.projects.list.return_value = projects
        # TODO(stephenfin): What is the return type of this method?
        mock_get_access_token.return_value = self.data.unscoped_access_info

        # GET the page to set the test cookie.
        response = self.client.get(url, form_data)
        self.assertEqual(response.status_code, 200)

        # POST to the page to log in.
        response = self.client.post(url, form_data)
        self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

    @mock.patch('keystoneauth1.identity.v3.Password.get_access')
    def test_invalid_credentials(self, mock_get_access):
        user = self.data.user
        form_data = self.get_form_data(user)
        form_data['password'] = "invalid"
        url = reverse('login')

        mock_get_access.side_effect = keystone_exceptions.Unauthorized(401)

        # GET the page to set the test cookie.
        response = self.client.get(url, form_data)
        self.assertEqual(response.status_code, 200)

        # POST to the page to log in.
        response = self.client.post(url, form_data)
        self.assertTemplateUsed(response, 'auth/login.html')
        self.assertContains(response, "Invalid credentials.")

        mock_get_access.assert_called_once_with(IsA(session.Session))

    @mock.patch('keystoneauth1.identity.v3.Password.get_access')
    def test_exception(self, mock_get_access):
        user = self.data.user
        form_data = self.get_form_data(user)
        url = reverse('login')

        mock_get_access.side_effect = \
            keystone_exceptions.ClientException('error 500')

        # GET the page to set the test cookie.
        response = self.client.get(url, form_data)
        self.assertEqual(response.status_code, 200)

        # POST to the page to log in.
        response = self.client.post(url, form_data)
        self.assertTemplateUsed(response, 'auth/login.html')
        self.assertContains(response,
                            ("An error occurred authenticating. Please try "
                             "again later."))

        mock_get_access.assert_called_once_with(IsA(session.Session))

    @mock.patch('keystoneauth1.identity.v3.Password.get_access')
    def test_password_expired(self, mock_get_access):
        user = self.data.user
        form_data = self.get_form_data(user)
        url = reverse('login')

        class ExpiredException(keystone_exceptions.Unauthorized):
            http_status = 401
            message = ("The password is expired and needs to be changed"
                       " for user: %s." % user.id)

        mock_get_access.side_effect = ExpiredException()

        # GET the page to set the test cookie.
        response = self.client.get(url, form_data)
        self.assertEqual(response.status_code, 200)

        # POST to the page to log in.
        response = self.client.post(url, form_data)

        # This fails with TemplateDoesNotExist for some reason.
        # self.assertRedirects(response, reverse('password', args=[user.id]))
        # so instead we check for the redirect manually:
        self.assertEqual(response.status_code, 302)
        self.assertEqual(response.url, "/password/%s/" % user.id)

        mock_get_access.assert_called_once_with(IsA(session.Session))

    def test_login_form_multidomain(self):
        override = self.settings(OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT=True)
        override.enable()
        self.addCleanup(override.disable)

        url = reverse('login')
        response = self.client.get(url)
        self.assertEqual(response.status_code, 200)
        self.assertContains(response, 'id="id_domain"')
        self.assertContains(response, 'name="domain"')

    @override_settings(
        OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT=True,
        OPENSTACK_KEYSTONE_DOMAIN_DROPDOWN=True,
        OPENSTACK_KEYSTONE_DOMAIN_CHOICES=(('Default', 'Default'),)
    )
    def test_login_form_multidomain_dropdown(self):
        url = reverse('login')
        response = self.client.get(url)
        self.assertEqual(response.status_code, 200)
        self.assertContains(response, 'id="id_domain"')
        self.assertContains(response, 'name="domain"')
        self.assertContains(response, 'option value="Default"')

    @mock.patch.object(projects.ProjectManager, 'list')
    def test_tenant_sorting(self, mock_project_list):
        projects = [self.data.project_two, self.data.project_one]
        expected_projects = [self.data.project_one, self.data.project_two]
        user = self.data.user

        mock_project_list.return_value = projects

        project_list = utils.get_project_list(
            user_id=user.id,
            auth_url=settings.OPENSTACK_KEYSTONE_URL,
            token=self.data.unscoped_access_info.auth_token)
        self.assertEqual(project_list, expected_projects)

        mock_project_list.assert_called_once()

    @mock.patch.object(v3_auth.Token, 'get_access')
    @mock.patch.object(password.PasswordPlugin, 'list_projects')
    @mock.patch.object(v3_auth.Password, 'get_access')
    def test_login_with_disabled_project(self, mock_get_access,
                                         mock_project_list,
                                         mock_get_access_token):
        # Test to validate that authentication will not try to get
        # a scoped token for a disabled project.
        projects = [self.data.project_two, self.data.project_one]
        user = self.data.user

        mock_get_access.return_value = self.data.unscoped_access_info
        mock_project_list.return_value = projects
        mock_get_access_token.return_value = self.data.unscoped_access_info

        form_data = self.get_form_data(user)
        url = reverse('login')

        # GET the page to set the test cookie.
        response = self.client.get(url, form_data)
        self.assertEqual(response.status_code, 200)

        # POST to the page to log in.
        response = self.client.post(url, form_data)
        self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        mock_get_access.assert_called_once_with(IsA(session.Session))
        mock_get_access_token.assert_called_with(IsA(session.Session))
        mock_project_list.assert_called_once_with(
            IsA(session.Session),
            IsA(v3_auth.Password),
            self.data.unscoped_access_info)

    @mock.patch.object(v3_auth.Token, 'get_access')
    @mock.patch.object(password.PasswordPlugin, 'list_projects')
    @mock.patch.object(v3_auth.Password, 'get_access')
    def test_no_enabled_projects(self, mock_get_access, mock_project_list,
                                 mock_get_access_token):
        projects = [self.data.project_two]
        user = self.data.user

        mock_get_access.return_value = self.data.unscoped_access_info
        mock_project_list.return_value = projects
        mock_get_access_token.return_value = self.data.unscoped_access_info

        form_data = self.get_form_data(user)
        url = reverse('login')

        # GET the page to set the test cookie.
        response = self.client.get(url, form_data)
        self.assertEqual(response.status_code, 200)

        # POST to the page to log in.
        response = self.client.post(url, form_data)
        self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        mock_get_access.assert_called_once_with(IsA(session.Session))
        mock_get_access_token.assert_called_with(IsA(session.Session))
        mock_project_list.assert_called_once_with(
            IsA(session.Session),
            IsA(v3_auth.Password),
            self.data.unscoped_access_info)

    @mock.patch.object(v3_auth.Token, 'get_access')
    @mock.patch.object(password.PasswordPlugin, 'list_projects')
    @mock.patch.object(v3_auth.Password, 'get_access')
    def test_no_projects(self, mock_get_access, mock_project_list,
                         mock_get_access_token):
        user = self.data.user
        form_data = self.get_form_data(user)

        mock_get_access.return_value = self.data.unscoped_access_info
        mock_get_access_token.return_value = self.data.unscoped_access_info
        mock_project_list.return_value = []

        url = reverse('login')

        # GET the page to set the test cookie.
        response = self.client.get(url, form_data)
        self.assertEqual(response.status_code, 200)

        # POST to the page to log in.
        response = self.client.post(url, form_data)
        self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        mock_get_access.assert_called_once_with(IsA(session.Session))
        mock_get_access_token.assert_called_with(IsA(session.Session))
        mock_project_list.assert_called_once_with(
            IsA(session.Session),
            IsA(v3_auth.Password),
            self.data.unscoped_access_info)

    @mock.patch.object(v3_auth.Token, 'get_access')
    @mock.patch.object(projects.ProjectManager, 'list')
    @mock.patch.object(v3_auth.Password, 'get_access')
    def test_fail_projects(self, mock_get_access, mock_project_list,
                           mock_get_access_token):
        user = self.data.user
        form_data = self.get_form_data(user)

        mock_get_access.return_value = self.data.unscoped_access_info
        mock_get_access_token.return_value = self.data.unscoped_access_info
        mock_project_list.side_effect = keystone_exceptions.AuthorizationFailure

        url = reverse('login')

        # GET the page to set the test cookie.
        response = self.client.get(url, form_data)
        self.assertEqual(response.status_code, 200)

        # POST to the page to log in.
        response = self.client.post(url, form_data)
        self.assertTemplateUsed(response, 'auth/login.html')
        self.assertContains(response,
                            'Unable to retrieve authorized projects.')

        mock_get_access.assert_called_once_with(IsA(session.Session))
        mock_get_access_token.assert_called_with(IsA(session.Session))
        mock_project_list.assert_called_once_with(user=user.id)

    @mock.patch.object(v3_auth.Token, 'get_access')
    @mock.patch.object(password.PasswordPlugin, 'list_projects')
    @mock.patch.object(v3_auth.Password, 'get_access')
    def test_switch(self, mock_get_access, mock_project_list,
                    mock_get_access_token,
                    next=None):
        project = self.data.project_two
        projects = [self.data.project_one, self.data.project_two]
        user = self.data.user
        scoped = self.data.scoped_access_info
        form_data = self.get_form_data(user)

        mock_get_access.return_value = self.data.unscoped_access_info
        mock_get_access_token.return_value = scoped
        mock_project_list.return_value = projects

        url = reverse('login')

        response = self.client.get(url)
        self.assertEqual(response.status_code, 200)

        response = self.client.post(url, form_data)
        self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        url = reverse('switch_tenants', args=[project.id])

        scoped._project['id'] = self.data.project_two.id

        if next:
            form_data.update({auth.REDIRECT_FIELD_NAME: next})

        response = self.client.get(url, form_data)

        if next:
            expected_url = next
            self.assertEqual(response['location'], expected_url)
        else:
            self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        self.assertEqual(self.client.session['token'].project['id'],
                         scoped.project_id)

        mock_get_access.assert_called_once_with(IsA(session.Session))
        mock_get_access_token.assert_called_with(IsA(session.Session))
        mock_project_list.assert_called_once_with(
            IsA(session.Session),
            IsA(v3_auth.Password),
            self.data.unscoped_access_info)

    def test_switch_with_next(self):
        self.test_switch(next='/next_url')

    @mock.patch.object(v3_auth.Token, 'get_access')
    @mock.patch.object(password.PasswordPlugin, 'list_projects')
    @mock.patch.object(v3_auth.Password, 'get_access')
    def test_switch_region(self, mock_get_access, mock_project_list,
                           mock_get_access_token,
                           next=None):
        projects = [self.data.project_one, self.data.project_two]
        user = self.data.user
        scoped = self.data.unscoped_access_info
        sc = self.data.service_catalog

        form_data = self.get_form_data(user)

        mock_get_access.return_value = self.data.unscoped_access_info
        mock_get_access_token.return_value = scoped
        mock_project_list.return_value = projects

        url = reverse('login')

        response = self.client.get(url)
        self.assertEqual(response.status_code, 200)

        response = self.client.post(url, form_data)
        self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        old_region = sc.get_endpoints()['compute'][0]['region']
        self.assertEqual(self.client.session['services_region'], old_region)

        region = sc.get_endpoints()['compute'][1]['region']
        url = reverse('switch_services_region', args=[region])

        form_data['region_name'] = region

        if next:
            form_data.update({auth.REDIRECT_FIELD_NAME: next})

        response = self.client.get(url, form_data)

        if next:
            expected_url = next
            self.assertEqual(response['location'], expected_url)
        else:
            self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        self.assertEqual(self.client.session['services_region'], region)

        mock_get_access.assert_called_once_with(IsA(session.Session))
        mock_get_access_token.assert_called_with(IsA(session.Session))
        mock_project_list.assert_called_once_with(
            IsA(session.Session),
            IsA(v3_auth.Password),
            self.data.unscoped_access_info)

    def test_switch_region_with_next(self, next=None):
        self.test_switch_region(next='/next_url')

    @mock.patch.object(v3_auth.Token, 'get_access')
    @mock.patch.object(password.PasswordPlugin, 'list_projects')
    @mock.patch.object(v3_auth.Password, 'get_access')
    def test_switch_system_scope(self, mock_get_access, mock_project_list,
                                 mock_get_access_token,
                                 next=None):
        projects = []
        user = self.data.user
        scoped = self.data.unscoped_access_info

        form_data = self.get_form_data(user)

        mock_get_access.return_value = self.data.unscoped_access_info
        mock_get_access_token.return_value = scoped
        mock_project_list.return_value = projects

        url = reverse('login')

        response = self.client.get(url)
        self.assertEqual(response.status_code, 200)

        response = self.client.post(url, form_data)
        self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        self.assertFalse(self.client.session['token'].system_scoped)

        url = reverse('switch_system_scope')

        if next:
            form_data.update({auth.REDIRECT_FIELD_NAME: next})

        response = self.client.get(url, form_data)

        if next:
            expected_url = next
            self.assertEqual(response['location'], expected_url)
        else:
            self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)

        self.assertNotEqual(False, self.client.session['token'].system_scoped)

        mock_get_access.assert_called_once_with(IsA(session.Session))
        mock_get_access_token.assert_called_with(IsA(session.Session))
        mock_project_list.assert_called_once_with(
            IsA(session.Session),
            IsA(v3_auth.Password),
            self.data.unscoped_access_info)


class OpenStackAuthTestsPublicURL(OpenStackAuthTests):
    interface = 'publicURL'


class OpenStackAuthTestsInternalURL(OpenStackAuthTests):
    interface = 'internalURL'


class OpenStackAuthTestsAdminURL(OpenStackAuthTests):
| 38.83795 | 80 | 0.644538 | 6,483 | 56,082 | 5.246028 | 0.051828 | 0.040753 | 0.037489 | 0.0394 | 0.871185 | 0.840312 | 0.815231 | 0.803205 | 0.785592 | 0.764275 | 0 | 0.007802 | 0.268642 | 56,082 | 1,443 | 81 | 38.864865 | 0.821387 | 0.039264 | 0 | 0.764249 | 0 | 0 | 0.054911 | 0.015331 | 0 | 0 | 0 | 0.000693 | 0.163212 | 1 | 0.030225 | false | 0.056995 | 0.012953 | 0.002591 | 0.062176 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
6b450fa805d135f4b92771b69c301659d2025f61 | 57,615 | py | Python | tests/components/tasmota/test_light.py | DatDraggy/core | 93572bfe029ad7d554db82fc74c6bfdd52bc835c | [
"Apache-2.0"
] | null | null | null | tests/components/tasmota/test_light.py | DatDraggy/core | 93572bfe029ad7d554db82fc74c6bfdd52bc835c | [
"Apache-2.0"
] | null | null | null | tests/components/tasmota/test_light.py | DatDraggy/core | 93572bfe029ad7d554db82fc74c6bfdd52bc835c | [
"Apache-2.0"
] | null | null | null | """The tests for the Tasmota light platform."""
import copy
import json
from unittest.mock import patch

from hatasmota.const import CONF_MAC
from hatasmota.utils import (
    get_topic_stat_result,
    get_topic_tele_state,
    get_topic_tele_will,
)

from homeassistant.components import light
from homeassistant.components.light import SUPPORT_EFFECT, SUPPORT_TRANSITION
from homeassistant.components.tasmota.const import DEFAULT_PREFIX
from homeassistant.const import ATTR_ASSUMED_STATE, STATE_OFF, STATE_ON

from .test_common import (
    DEFAULT_CONFIG,
    help_test_availability,
    help_test_availability_discovery_update,
    help_test_availability_poll_state,
    help_test_availability_when_connection_lost,
    help_test_discovery_device_remove,
    help_test_discovery_removal,
    help_test_discovery_update_unchanged,
    help_test_entity_id_update_discovery_update,
    help_test_entity_id_update_subscriptions,
)

from tests.common import async_fire_mqtt_message
from tests.components.light import common


async def test_attributes_on_off(hass, mqtt_mock, setup_tasmota):
    """Test state update via MQTT."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 1
    config["so"]["30"] = 1  # Enforce Home Assistant auto-discovery as light
    mac = config["mac"]

    async_fire_mqtt_message(
        hass,
        f"{DEFAULT_PREFIX}/{mac}/config",
        json.dumps(config),
    )
    await hass.async_block_till_done()

    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON"}')

    state = hass.states.get("light.test")
    assert state.attributes.get("effect_list") is None
    assert state.attributes.get("min_mireds") is None
    assert state.attributes.get("max_mireds") is None
    assert state.attributes.get("supported_features") == 0
    assert state.attributes.get("supported_color_modes") == ["onoff"]
    assert state.attributes.get("color_mode") == "onoff"


async def test_attributes_dimmer_tuya(hass, mqtt_mock, setup_tasmota):
    """Test state update via MQTT."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 2
    config["lt_st"] = 1  # 1 channel light (dimmer)
    config["ty"] = 1  # Tuya device
    mac = config["mac"]

    async_fire_mqtt_message(
        hass,
        f"{DEFAULT_PREFIX}/{mac}/config",
        json.dumps(config),
    )
    await hass.async_block_till_done()

    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON"}')

    state = hass.states.get("light.test")
    assert state.attributes.get("effect_list") is None
    assert state.attributes.get("min_mireds") is None
    assert state.attributes.get("max_mireds") is None
    assert state.attributes.get("supported_features") == 0
    assert state.attributes.get("supported_color_modes") == ["brightness"]
    assert state.attributes.get("color_mode") == "brightness"


async def test_attributes_dimmer(hass, mqtt_mock, setup_tasmota):
    """Test state update via MQTT."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 2
    config["lt_st"] = 1  # 1 channel light (dimmer)
    mac = config["mac"]

    async_fire_mqtt_message(
        hass,
        f"{DEFAULT_PREFIX}/{mac}/config",
        json.dumps(config),
    )
    await hass.async_block_till_done()

    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON"}')

    state = hass.states.get("light.test")
    assert state.attributes.get("effect_list") is None
    assert state.attributes.get("min_mireds") is None
    assert state.attributes.get("max_mireds") is None
    assert state.attributes.get("supported_features") == SUPPORT_TRANSITION
    assert state.attributes.get("supported_color_modes") == ["brightness"]
    assert state.attributes.get("color_mode") == "brightness"


async def test_attributes_ct(hass, mqtt_mock, setup_tasmota):
    """Test state update via MQTT."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 2
    config["lt_st"] = 2  # 2 channel light (CW)
    mac = config["mac"]

    async_fire_mqtt_message(
        hass,
        f"{DEFAULT_PREFIX}/{mac}/config",
        json.dumps(config),
    )
    await hass.async_block_till_done()

    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON"}')

    state = hass.states.get("light.test")
    assert state.attributes.get("effect_list") is None
    assert state.attributes.get("min_mireds") == 153
    assert state.attributes.get("max_mireds") == 500
    assert state.attributes.get("supported_features") == SUPPORT_TRANSITION
    assert state.attributes.get("supported_color_modes") == ["color_temp"]
    assert state.attributes.get("color_mode") == "color_temp"


async def test_attributes_ct_reduced(hass, mqtt_mock, setup_tasmota):
    """Test state update via MQTT."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 2
    config["lt_st"] = 2  # 2 channel light (CW)
    config["so"]["82"] = 1  # Reduced CT range
    mac = config["mac"]

    async_fire_mqtt_message(
        hass,
        f"{DEFAULT_PREFIX}/{mac}/config",
        json.dumps(config),
    )
    await hass.async_block_till_done()

    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON"}')

    state = hass.states.get("light.test")
    assert state.attributes.get("effect_list") is None
    assert state.attributes.get("min_mireds") == 200
    assert state.attributes.get("max_mireds") == 380
    assert state.attributes.get("supported_features") == SUPPORT_TRANSITION
    assert state.attributes.get("supported_color_modes") == ["color_temp"]
    assert state.attributes.get("color_mode") == "color_temp"


async def test_attributes_rgb(hass, mqtt_mock, setup_tasmota):
    """Test state update via MQTT."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 2
    config["lt_st"] = 3  # 3 channel light (RGB)
    mac = config["mac"]

    async_fire_mqtt_message(
        hass,
        f"{DEFAULT_PREFIX}/{mac}/config",
        json.dumps(config),
    )
    await hass.async_block_till_done()

    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON"}')

    state = hass.states.get("light.test")
    assert state.attributes.get("effect_list") == [
        "None",
        "Wake up",
        "Cycle up",
        "Cycle down",
        "Random",
    ]
    assert state.attributes.get("min_mireds") is None
    assert state.attributes.get("max_mireds") is None
    assert (
        state.attributes.get("supported_features")
        == SUPPORT_EFFECT | SUPPORT_TRANSITION
    )
    assert state.attributes.get("supported_color_modes") == ["rgb"]
    assert state.attributes.get("color_mode") == "rgb"


async def test_attributes_rgbw(hass, mqtt_mock, setup_tasmota):
    """Test state update via MQTT."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 2
    config["lt_st"] = 4  # 4 channel light (RGBW)
    mac = config["mac"]

    async_fire_mqtt_message(
        hass,
        f"{DEFAULT_PREFIX}/{mac}/config",
        json.dumps(config),
    )
    await hass.async_block_till_done()

    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON"}')

    state = hass.states.get("light.test")
    assert state.attributes.get("effect_list") == [
        "None",
        "Wake up",
        "Cycle up",
        "Cycle down",
        "Random",
    ]
    assert state.attributes.get("min_mireds") is None
    assert state.attributes.get("max_mireds") is None
    assert (
        state.attributes.get("supported_features")
        == SUPPORT_EFFECT | SUPPORT_TRANSITION
    )
    assert state.attributes.get("supported_color_modes") == ["rgb", "rgbw"]
    assert state.attributes.get("color_mode") == "rgbw"


async def test_attributes_rgbww(hass, mqtt_mock, setup_tasmota):
    """Test state update via MQTT."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 2
    config["lt_st"] = 5  # 5 channel light (RGBCW)
    mac = config["mac"]

    async_fire_mqtt_message(
        hass,
        f"{DEFAULT_PREFIX}/{mac}/config",
        json.dumps(config),
    )
    await hass.async_block_till_done()

    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON"}')

    state = hass.states.get("light.test")
    assert state.attributes.get("effect_list") == [
        "None",
        "Wake up",
        "Cycle up",
        "Cycle down",
        "Random",
    ]
    assert state.attributes.get("min_mireds") == 153
    assert state.attributes.get("max_mireds") == 500
    assert (
        state.attributes.get("supported_features")
        == SUPPORT_EFFECT | SUPPORT_TRANSITION
    )
    assert state.attributes.get("supported_color_modes") == ["color_temp", "rgb"]
    assert state.attributes.get("color_mode") == "color_temp"


async def test_attributes_rgbww_reduced(hass, mqtt_mock, setup_tasmota):
    """Test state update via MQTT."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 2
    config["lt_st"] = 5  # 5 channel light (RGBCW)
    config["so"]["82"] = 1  # Reduced CT range
    mac = config["mac"]

    async_fire_mqtt_message(
        hass,
        f"{DEFAULT_PREFIX}/{mac}/config",
        json.dumps(config),
    )
    await hass.async_block_till_done()

    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON"}')

    state = hass.states.get("light.test")
    assert state.attributes.get("effect_list") == [
        "None",
        "Wake up",
        "Cycle up",
        "Cycle down",
        "Random",
    ]
    assert state.attributes.get("min_mireds") == 200
    assert state.attributes.get("max_mireds") == 380
    assert (
        state.attributes.get("supported_features")
        == SUPPORT_EFFECT | SUPPORT_TRANSITION
    )
    assert state.attributes.get("supported_color_modes") == ["color_temp", "rgb"]
    assert state.attributes.get("color_mode") == "color_temp"


async def test_controlling_state_via_mqtt_on_off(hass, mqtt_mock, setup_tasmota):
    """Test state update via MQTT."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 1
    config["so"]["30"] = 1  # Enforce Home Assistant auto-discovery as light
    mac = config["mac"]

    async_fire_mqtt_message(
        hass,
        f"{DEFAULT_PREFIX}/{mac}/config",
        json.dumps(config),
    )
    await hass.async_block_till_done()

    state = hass.states.get("light.test")
    assert state.state == "unavailable"
    assert not state.attributes.get(ATTR_ASSUMED_STATE)
    assert "color_mode" not in state.attributes

    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
    state = hass.states.get("light.test")
    assert state.state == STATE_OFF
    assert not state.attributes.get(ATTR_ASSUMED_STATE)
    assert "color_mode" not in state.attributes

    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON"}')
    state = hass.states.get("light.test")
    assert state.state == STATE_ON
    assert state.attributes.get("color_mode") == "onoff"

    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"OFF"}')
    state = hass.states.get("light.test")
    assert state.state == STATE_OFF
    assert "color_mode" not in state.attributes

    async_fire_mqtt_message(hass, "tasmota_49A3BC/stat/RESULT", '{"POWER":"ON"}')
    state = hass.states.get("light.test")
    assert state.state == STATE_ON
    assert state.attributes.get("color_mode") == "onoff"

    async_fire_mqtt_message(hass, "tasmota_49A3BC/stat/RESULT", '{"POWER":"OFF"}')
    state = hass.states.get("light.test")
    assert state.state == STATE_OFF
    assert "color_mode" not in state.attributes


async def test_controlling_state_via_mqtt_ct(hass, mqtt_mock, setup_tasmota):
    """Test state update via MQTT."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 2
    config["lt_st"] = 2  # 2 channel light (CT)
    mac = config["mac"]

    async_fire_mqtt_message(
        hass,
        f"{DEFAULT_PREFIX}/{mac}/config",
        json.dumps(config),
    )
    await hass.async_block_till_done()

    state = hass.states.get("light.test")
    assert state.state == "unavailable"
    assert not state.attributes.get(ATTR_ASSUMED_STATE)
    assert "color_mode" not in state.attributes

    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
    state = hass.states.get("light.test")
    assert state.state == STATE_OFF
    assert not state.attributes.get(ATTR_ASSUMED_STATE)
    assert "color_mode" not in state.attributes

    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON"}')
    state = hass.states.get("light.test")
    assert state.state == STATE_ON
    assert state.attributes.get("color_mode") == "color_temp"

    async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"OFF"}')
    state = hass.states.get("light.test")
    assert state.state == STATE_OFF
    assert "color_mode" not in state.attributes

    async_fire_mqtt_message(
        hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","Dimmer":50}'
    )
    state = hass.states.get("light.test")
    assert state.state == STATE_ON
    assert state.attributes.get("brightness") == 127.5
    assert state.attributes.get("color_mode") == "color_temp"

    async_fire_mqtt_message(
        hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","CT":300}'
    )
    state = hass.states.get("light.test")
    assert state.state == STATE_ON
    assert state.attributes.get("color_temp") == 300
    assert state.attributes.get("color_mode") == "color_temp"

    # Tasmota will send "Color" also for CT light, this should be ignored
    async_fire_mqtt_message(
        hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","Color":"255,128"}'
    )
    state = hass.states.get("light.test")
    assert state.state == STATE_ON
    assert state.attributes.get("color_temp") == 300
    assert state.attributes.get("brightness") == 127.5
    assert state.attributes.get("color_mode") == "color_temp"


async def test_controlling_state_via_mqtt_rgbww(hass, mqtt_mock, setup_tasmota):
    """Test state update via MQTT for an RGBWW light."""
config = copy.deepcopy(DEFAULT_CONFIG)
config["rl"][0] = 2
config["lt_st"] = 5 # 5 channel light (RGBCW)
mac = config["mac"]
async_fire_mqtt_message(
hass,
f"{DEFAULT_PREFIX}/{mac}/config",
json.dumps(config),
)
await hass.async_block_till_done()
state = hass.states.get("light.test")
assert state.state == "unavailable"
assert not state.attributes.get(ATTR_ASSUMED_STATE)
assert "color_mode" not in state.attributes
async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
state = hass.states.get("light.test")
assert state.state == STATE_OFF
assert not state.attributes.get(ATTR_ASSUMED_STATE)
assert "color_mode" not in state.attributes
async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON"}')
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("color_mode") == "color_temp"
async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"OFF"}')
state = hass.states.get("light.test")
assert state.state == STATE_OFF
assert "color_mode" not in state.attributes
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","Dimmer":50}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("brightness") == 127.5
assert state.attributes.get("color_mode") == "color_temp"
async_fire_mqtt_message(
hass,
"tasmota_49A3BC/tele/STATE",
'{"POWER":"ON","Color":"128,64,0","White":0}',
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("rgb_color") == (255, 128, 0)
assert state.attributes.get("color_mode") == "rgb"
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","White":50}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert "white_value" not in state.attributes
# Setting white > 0 should clear the color
assert "rgb_color" not in state.attributes
assert state.attributes.get("color_mode") == "color_temp"
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","CT":300}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("color_temp") == 300
assert state.attributes.get("color_mode") == "color_temp"
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","White":0}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
# Setting white to 0 should clear the color_temp
assert "white_value" not in state.attributes
assert "color_temp" not in state.attributes
assert state.attributes.get("rgb_color") == (255, 128, 0)
assert state.attributes.get("color_mode") == "rgb"
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","Scheme":3}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("effect") == "Cycle down"
async_fire_mqtt_message(hass, "tasmota_49A3BC/stat/RESULT", '{"POWER":"ON"}')
state = hass.states.get("light.test")
assert state.state == STATE_ON
async_fire_mqtt_message(hass, "tasmota_49A3BC/stat/RESULT", '{"POWER":"OFF"}')
state = hass.states.get("light.test")
assert state.state == STATE_OFF


async def test_controlling_state_via_mqtt_rgbww_hex(hass, mqtt_mock, setup_tasmota):
    """Test state update via MQTT for an RGBWW light with hex colors."""
config = copy.deepcopy(DEFAULT_CONFIG)
config["rl"][0] = 2
config["lt_st"] = 5 # 5 channel light (RGBCW)
config["so"]["17"] = 0 # Hex color in state updates
mac = config["mac"]
async_fire_mqtt_message(
hass,
f"{DEFAULT_PREFIX}/{mac}/config",
json.dumps(config),
)
await hass.async_block_till_done()
state = hass.states.get("light.test")
assert state.state == "unavailable"
assert not state.attributes.get(ATTR_ASSUMED_STATE)
assert "color_mode" not in state.attributes
async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
state = hass.states.get("light.test")
assert state.state == STATE_OFF
assert not state.attributes.get(ATTR_ASSUMED_STATE)
assert "color_mode" not in state.attributes
async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON"}')
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("color_mode") == "color_temp"
async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"OFF"}')
state = hass.states.get("light.test")
assert state.state == STATE_OFF
assert "color_mode" not in state.attributes
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","Dimmer":50}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("brightness") == 127.5
assert state.attributes.get("color_mode") == "color_temp"
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","Color":"804000","White":0}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("rgb_color") == (255, 128, 0)
assert state.attributes.get("color_mode") == "rgb"
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","Color":"0080400000"}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("rgb_color") == (0, 255, 128)
assert state.attributes.get("color_mode") == "rgb"
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","White":50}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert "white_value" not in state.attributes
# Setting white > 0 should clear the color
assert "rgb_color" not in state.attributes
assert state.attributes.get("color_mode") == "color_temp"
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","CT":300}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("color_temp") == 300
assert state.attributes.get("color_mode") == "color_temp"
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","White":0}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
# Setting white to 0 should clear the white_value and color_temp
assert not state.attributes.get("white_value")
assert not state.attributes.get("color_temp")
assert state.attributes.get("color_mode") == "rgb"
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","Scheme":3}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("effect") == "Cycle down"
async_fire_mqtt_message(hass, "tasmota_49A3BC/stat/RESULT", '{"POWER":"ON"}')
state = hass.states.get("light.test")
assert state.state == STATE_ON
async_fire_mqtt_message(hass, "tasmota_49A3BC/stat/RESULT", '{"POWER":"OFF"}')
state = hass.states.get("light.test")
assert state.state == STATE_OFF
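# With SetOption17 = 0, Tasmota reports "Color" as a hex string. The assertions
# above assume the first three bytes are RGB (RGBWW payloads append two white
# bytes) and that the tuple is normalized so the brightest channel is 255,
# since brightness is carried separately by "Dimmer". A sketch of that
# decoding (the helper name is illustrative, not hatasmota's actual API):

```python
def parse_tasmota_hex_color(color):
    """Decode a Tasmota hex Color payload to a normalized RGB tuple."""
    # First three bytes are R, G, B; any trailing white bytes are ignored.
    rgb = tuple(int(color[i : i + 2], 16) for i in (0, 2, 4))
    if max(rgb) == 0:
        return rgb
    # Normalize so the brightest channel is 255.
    scale = 255 / max(rgb)
    return tuple(round(channel * scale) for channel in rgb)
```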


async def test_controlling_state_via_mqtt_rgbww_tuya(hass, mqtt_mock, setup_tasmota):
    """Test state update via MQTT for a Tuya RGBWW light."""
config = copy.deepcopy(DEFAULT_CONFIG)
config["rl"][0] = 2
config["lt_st"] = 5 # 5 channel light (RGBCW)
config["ty"] = 1 # Tuya device
mac = config["mac"]
async_fire_mqtt_message(
hass,
f"{DEFAULT_PREFIX}/{mac}/config",
json.dumps(config),
)
await hass.async_block_till_done()
state = hass.states.get("light.test")
assert state.state == "unavailable"
assert not state.attributes.get(ATTR_ASSUMED_STATE)
assert "color_mode" not in state.attributes
async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
state = hass.states.get("light.test")
assert state.state == STATE_OFF
assert not state.attributes.get(ATTR_ASSUMED_STATE)
assert "color_mode" not in state.attributes
async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON"}')
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("color_mode") == "color_temp"
async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"OFF"}')
state = hass.states.get("light.test")
assert state.state == STATE_OFF
assert "color_mode" not in state.attributes
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","Dimmer":50}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("brightness") == 127.5
assert state.attributes.get("color_mode") == "color_temp"
async_fire_mqtt_message(
hass,
"tasmota_49A3BC/tele/STATE",
'{"POWER":"ON","Color":"128,64,0","White":0}',
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("rgb_color") == (255, 128, 0)
assert state.attributes.get("color_mode") == "rgb"
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","White":50}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert "white_value" not in state.attributes
# Setting white > 0 should clear the color
assert "rgb_color" not in state.attributes
assert state.attributes.get("color_mode") == "color_temp"
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","CT":300}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("color_temp") == 300
assert state.attributes.get("color_mode") == "color_temp"
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","White":0}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
# Setting white to 0 should clear the white_value and color_temp
assert not state.attributes.get("white_value")
assert not state.attributes.get("color_temp")
assert state.attributes.get("color_mode") == "rgb"
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","Scheme":3}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("effect") == "Cycle down"
async_fire_mqtt_message(hass, "tasmota_49A3BC/stat/RESULT", '{"POWER":"ON"}')
state = hass.states.get("light.test")
assert state.state == STATE_ON
async_fire_mqtt_message(hass, "tasmota_49A3BC/stat/RESULT", '{"POWER":"OFF"}')
state = hass.states.get("light.test")
assert state.state == STATE_OFF


async def test_sending_mqtt_commands_on_off(hass, mqtt_mock, setup_tasmota):
    """Test sending MQTT on/off commands."""
config = copy.deepcopy(DEFAULT_CONFIG)
config["rl"][0] = 1
config["so"]["30"] = 1 # Enforce Home Assistant auto-discovery as light
mac = config["mac"]
async_fire_mqtt_message(
hass,
f"{DEFAULT_PREFIX}/{mac}/config",
json.dumps(config),
)
await hass.async_block_till_done()
async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
state = hass.states.get("light.test")
assert state.state == STATE_OFF
await hass.async_block_till_done()
await hass.async_block_till_done()
mqtt_mock.async_publish.reset_mock()
# Turn the light on and verify MQTT message is sent
await common.async_turn_on(hass, "light.test")
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Power1", "ON", 0, False
)
mqtt_mock.async_publish.reset_mock()
# Tasmota is not optimistic, the state should still be off
state = hass.states.get("light.test")
assert state.state == STATE_OFF
# Turn the light off and verify MQTT message is sent
await common.async_turn_off(hass, "light.test")
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Power1", "OFF", 0, False
)
mqtt_mock.async_publish.reset_mock()


async def test_sending_mqtt_commands_rgbww_tuya(hass, mqtt_mock, setup_tasmota):
    """Test sending MQTT commands to a Tuya RGBWW light."""
config = copy.deepcopy(DEFAULT_CONFIG)
config["rl"][0] = 2
config["lt_st"] = 5 # 5 channel light (RGBCW)
config["ty"] = 1 # Tuya device
mac = config["mac"]
async_fire_mqtt_message(
hass,
f"{DEFAULT_PREFIX}/{mac}/config",
json.dumps(config),
)
await hass.async_block_till_done()
async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
state = hass.states.get("light.test")
assert state.state == STATE_OFF
await hass.async_block_till_done()
await hass.async_block_till_done()
mqtt_mock.async_publish.reset_mock()
# Turn the light on and verify MQTT message is sent
await common.async_turn_on(hass, "light.test")
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog", "NoDelay;Power1 ON", 0, False
)
mqtt_mock.async_publish.reset_mock()
# Tasmota is not optimistic, the state should still be off
state = hass.states.get("light.test")
assert state.state == STATE_OFF
# Turn the light off and verify MQTT message is sent
await common.async_turn_off(hass, "light.test")
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog", "NoDelay;Power1 OFF", 0, False
)
mqtt_mock.async_publish.reset_mock()
# Turn the light on and verify MQTT messages are sent
await common.async_turn_on(hass, "light.test", brightness=192)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog", "NoDelay;Dimmer3 75", 0, False
)
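# The "Dimmer3 75" payload above comes from mapping HA brightness (0-255) back
# to a Tasmota percentage. A sketch of the rounding these tests assume (the
# helper name is illustrative, not part of hatasmota):

```python
def ha_brightness_to_tasmota_dimmer(brightness):
    """Scale HA brightness (0-255) to a Tasmota Dimmer percentage (0-100)."""
    return round(brightness * 100 / 255)
```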


async def test_sending_mqtt_commands_rgbw(hass, mqtt_mock, setup_tasmota):
    """Test sending MQTT commands to an RGBW light."""
config = copy.deepcopy(DEFAULT_CONFIG)
config["rl"][0] = 2
config["lt_st"] = 4 # 4 channel light (RGBW)
mac = config["mac"]
async_fire_mqtt_message(
hass,
f"{DEFAULT_PREFIX}/{mac}/config",
json.dumps(config),
)
await hass.async_block_till_done()
async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
state = hass.states.get("light.test")
assert state.state == STATE_OFF
await hass.async_block_till_done()
await hass.async_block_till_done()
mqtt_mock.async_publish.reset_mock()
# Turn the light on and verify MQTT message is sent
await common.async_turn_on(hass, "light.test")
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog", "NoDelay;Power1 ON", 0, False
)
mqtt_mock.async_publish.reset_mock()
# Tasmota is not optimistic, the state should still be off
state = hass.states.get("light.test")
assert state.state == STATE_OFF
# Turn the light off and verify MQTT message is sent
await common.async_turn_off(hass, "light.test")
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog", "NoDelay;Power1 OFF", 0, False
)
mqtt_mock.async_publish.reset_mock()
# Turn the light on and verify MQTT messages are sent
await common.async_turn_on(hass, "light.test", brightness=192)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog", "NoDelay;Dimmer 75", 0, False
)
mqtt_mock.async_publish.reset_mock()
    # Set color when turning on with rgb_color
await common.async_turn_on(hass, "light.test", rgb_color=[128, 64, 32])
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Power1 ON;NoDelay;Color2 128,64,32",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
# Set color when setting brighter color than white
await common.async_turn_on(hass, "light.test", rgbw_color=[128, 64, 32, 16])
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Power1 ON;NoDelay;Color2 128,64,32",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
# Set white when setting brighter white than color
await common.async_turn_on(hass, "light.test", rgbw_color=[16, 64, 32, 128])
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Power1 ON;NoDelay;White 50",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
await common.async_turn_on(hass, "light.test", white_value=128)
# white_value should be ignored
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Power1 ON",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
await common.async_turn_on(hass, "light.test", effect="Random")
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Power1 ON;NoDelay;Scheme 4",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
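# For an RGBW light, the test above expects either a Color2 or a White command
# depending on whether the color channels or the white channel is brighter. A
# simplified model of that choice (the >= tie-break and the helper name are
# assumptions for illustration, not hatasmota's actual code):

```python
def rgbw_turn_on_command(rgbw):
    """Pick the Tasmota command expected for an rgbw_color turn-on request."""
    red, green, blue, white = rgbw
    if max(red, green, blue) >= white:
        # Color is at least as bright as white: send the RGB tuple.
        return f"Color2 {red},{green},{blue}"
    # White dominates: send White as a 0-100 percentage.
    return f"White {round(white * 100 / 255)}"
```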


async def test_sending_mqtt_commands_rgbww(hass, mqtt_mock, setup_tasmota):
    """Test sending MQTT commands to an RGBWW light."""
config = copy.deepcopy(DEFAULT_CONFIG)
config["rl"][0] = 2
config["lt_st"] = 5 # 5 channel light (RGBCW)
mac = config["mac"]
async_fire_mqtt_message(
hass,
f"{DEFAULT_PREFIX}/{mac}/config",
json.dumps(config),
)
await hass.async_block_till_done()
async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
state = hass.states.get("light.test")
assert state.state == STATE_OFF
await hass.async_block_till_done()
await hass.async_block_till_done()
mqtt_mock.async_publish.reset_mock()
# Turn the light on and verify MQTT message is sent
await common.async_turn_on(hass, "light.test")
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog", "NoDelay;Power1 ON", 0, False
)
mqtt_mock.async_publish.reset_mock()
# Tasmota is not optimistic, the state should still be off
state = hass.states.get("light.test")
assert state.state == STATE_OFF
# Turn the light off and verify MQTT message is sent
await common.async_turn_off(hass, "light.test")
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog", "NoDelay;Power1 OFF", 0, False
)
mqtt_mock.async_publish.reset_mock()
# Turn the light on and verify MQTT messages are sent
await common.async_turn_on(hass, "light.test", brightness=192)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog", "NoDelay;Dimmer 75", 0, False
)
mqtt_mock.async_publish.reset_mock()
await common.async_turn_on(hass, "light.test", rgb_color=[128, 64, 32])
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Power1 ON;NoDelay;Color2 128,64,32",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
await common.async_turn_on(hass, "light.test", color_temp=200)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Power1 ON;NoDelay;CT 200",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
await common.async_turn_on(hass, "light.test", white_value=128)
# white_value should be ignored
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Power1 ON",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
await common.async_turn_on(hass, "light.test", effect="Random")
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Power1 ON;NoDelay;Scheme 4",
0,
False,
)
mqtt_mock.async_publish.reset_mock()


async def test_sending_mqtt_commands_power_unlinked(hass, mqtt_mock, setup_tasmota):
    """Test sending MQTT commands to a light with unlinked dimlevel and power."""
config = copy.deepcopy(DEFAULT_CONFIG)
config["rl"][0] = 2
config["lt_st"] = 1 # 1 channel light (dimmer)
config["so"]["20"] = 1 # Update of Dimmer/Color/CT without turning power on
mac = config["mac"]
async_fire_mqtt_message(
hass,
f"{DEFAULT_PREFIX}/{mac}/config",
json.dumps(config),
)
await hass.async_block_till_done()
async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
state = hass.states.get("light.test")
assert state.state == STATE_OFF
await hass.async_block_till_done()
await hass.async_block_till_done()
mqtt_mock.async_publish.reset_mock()
# Turn the light on and verify MQTT message is sent
await common.async_turn_on(hass, "light.test")
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog", "NoDelay;Power1 ON", 0, False
)
mqtt_mock.async_publish.reset_mock()
# Tasmota is not optimistic, the state should still be off
state = hass.states.get("light.test")
assert state.state == STATE_OFF
# Turn the light off and verify MQTT message is sent
await common.async_turn_off(hass, "light.test")
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog", "NoDelay;Power1 OFF", 0, False
)
mqtt_mock.async_publish.reset_mock()
    # Turn the light on and verify MQTT messages are sent; Dimmer is sent before POWER
await common.async_turn_on(hass, "light.test", brightness=192)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Dimmer 75;NoDelay;Power1 ON",
0,
False,
)
mqtt_mock.async_publish.reset_mock()


async def test_transition(hass, mqtt_mock, setup_tasmota):
    """Test transition commands."""
config = copy.deepcopy(DEFAULT_CONFIG)
config["rl"][0] = 2
config["lt_st"] = 5 # 5 channel light (RGBCW)
mac = config["mac"]
async_fire_mqtt_message(
hass,
f"{DEFAULT_PREFIX}/{mac}/config",
json.dumps(config),
)
await hass.async_block_till_done()
async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
state = hass.states.get("light.test")
assert state.state == STATE_OFF
await hass.async_block_till_done()
await hass.async_block_till_done()
mqtt_mock.async_publish.reset_mock()
# Dim the light from 0->100: Speed should be 4*2=8
await common.async_turn_on(hass, "light.test", brightness=255, transition=4)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Fade2 1;NoDelay;Speed2 8;NoDelay;Dimmer 100",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
# Dim the light from 0->100: Speed should be capped at 40
await common.async_turn_on(hass, "light.test", brightness=255, transition=100)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Fade2 1;NoDelay;Speed2 40;NoDelay;Dimmer 100",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
# Dim the light from 0->0: Speed should be 1
await common.async_turn_on(hass, "light.test", brightness=0, transition=100)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Fade2 1;NoDelay;Speed2 1;NoDelay;Power1 OFF",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
# Dim the light from 0->50: Speed should be 4*2*2=16
await common.async_turn_on(hass, "light.test", brightness=128, transition=4)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Fade2 1;NoDelay;Speed2 16;NoDelay;Dimmer 50",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
# Fake state update from the light
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","Dimmer":50}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("brightness") == 127.5
# Dim the light from 50->0: Speed should be 6*2*2=24
await common.async_turn_off(hass, "light.test", transition=6)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Fade2 1;NoDelay;Speed2 24;NoDelay;Power1 OFF",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
# Fake state update from the light
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","Dimmer":100}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("brightness") == 255
# Dim the light from 100->0: Speed should be 0
await common.async_turn_off(hass, "light.test", transition=0)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Fade2 0;NoDelay;Power1 OFF",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
# Fake state update from the light
async_fire_mqtt_message(
hass,
"tasmota_49A3BC/tele/STATE",
'{"POWER":"ON","Dimmer":50, "Color":"0,255,0", "White":0}',
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("brightness") == 127.5
assert state.attributes.get("rgb_color") == (0, 255, 0)
# Set color of the light from 0,255,0 to 255,0,0 @ 50%: Speed should be 6*2*2=24
await common.async_turn_on(hass, "light.test", rgb_color=[255, 0, 0], transition=6)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Fade2 1;NoDelay;Speed2 24;NoDelay;Power1 ON;NoDelay;Color2 255,0,0",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
# Fake state update from the light
async_fire_mqtt_message(
hass,
"tasmota_49A3BC/tele/STATE",
'{"POWER":"ON","Dimmer":100, "Color":"0,255,0"}',
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("brightness") == 255
assert state.attributes.get("rgb_color") == (0, 255, 0)
# Set color of the light from 0,255,0 to 255,0,0 @ 100%: Speed should be 6*2=12
await common.async_turn_on(hass, "light.test", rgb_color=[255, 0, 0], transition=6)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Fade2 1;NoDelay;Speed2 12;NoDelay;Power1 ON;NoDelay;Color2 255,0,0",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
# Fake state update from the light
async_fire_mqtt_message(
hass,
"tasmota_49A3BC/tele/STATE",
'{"POWER":"ON","Dimmer":50, "CT":153, "White":50}',
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("brightness") == 127.5
assert state.attributes.get("color_temp") == 153
# Set color_temp of the light from 153 to 500 @ 50%: Speed should be 6*2*2=24
await common.async_turn_on(hass, "light.test", color_temp=500, transition=6)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Fade2 1;NoDelay;Speed2 24;NoDelay;Power1 ON;NoDelay;CT 500",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
# Fake state update from the light
async_fire_mqtt_message(
hass, "tasmota_49A3BC/tele/STATE", '{"POWER":"ON","Dimmer":50, "CT":500}'
)
state = hass.states.get("light.test")
assert state.state == STATE_ON
assert state.attributes.get("brightness") == 127.5
assert state.attributes.get("color_temp") == 500
# Set color_temp of the light from 500 to 326 @ 50%: Speed should be 6*2*2*2=48->40
await common.async_turn_on(hass, "light.test", color_temp=326, transition=6)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Fade2 1;NoDelay;Speed2 40;NoDelay;Power1 ON;NoDelay;CT 326",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
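# The Speed2 values asserted above are consistent with a simple model: fade
# speed is twice the requested transition time divided by the fraction of the
# attribute range actually changed, clamped to 1..40, with Fade disabled
# entirely for transition 0. This is a simplified reconstruction consistent
# with the expectations in this test, not hatasmota's actual code:

```python
def expected_fade_speed(transition, fraction_changed):
    """Approximate Tasmota fade speed for a transition over a relative change."""
    if transition == 0:
        return 0  # Fade2 0: fading disabled
    if fraction_changed == 0:
        return 1  # Nothing to fade: minimum speed
    return min(40, max(1, round(2 * transition / fraction_changed)))
```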


async def test_transition_fixed(hass, mqtt_mock, setup_tasmota):
    """Test transition commands with fixed-duration fading."""
config = copy.deepcopy(DEFAULT_CONFIG)
config["rl"][0] = 2
config["lt_st"] = 5 # 5 channel light (RGBCW)
config["so"]["117"] = 1 # fading at fixed duration instead of fixed slew rate
mac = config["mac"]
async_fire_mqtt_message(
hass,
f"{DEFAULT_PREFIX}/{mac}/config",
json.dumps(config),
)
await hass.async_block_till_done()
async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
state = hass.states.get("light.test")
assert state.state == STATE_OFF
await hass.async_block_till_done()
await hass.async_block_till_done()
mqtt_mock.async_publish.reset_mock()
# Dim the light from 0->100: Speed should be 4*2=8
await common.async_turn_on(hass, "light.test", brightness=255, transition=4)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Fade2 1;NoDelay;Speed2 8;NoDelay;Dimmer 100",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
# Dim the light from 0->100: Speed should be capped at 40
await common.async_turn_on(hass, "light.test", brightness=255, transition=100)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Fade2 1;NoDelay;Speed2 40;NoDelay;Dimmer 100",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
# Dim the light from 0->0: Speed should be 4*2=8
await common.async_turn_on(hass, "light.test", brightness=0, transition=4)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Fade2 1;NoDelay;Speed2 8;NoDelay;Power1 OFF",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
# Dim the light from 0->50: Speed should be 4*2=8
await common.async_turn_on(hass, "light.test", brightness=128, transition=4)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Fade2 1;NoDelay;Speed2 8;NoDelay;Dimmer 50",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
# Dim the light from 0->50: Speed should be 0
await common.async_turn_on(hass, "light.test", brightness=128, transition=0)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
"NoDelay;Fade2 0;NoDelay;Dimmer 50",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
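# With SetOption117 = 1 the fade takes a fixed duration, so the speed asserted
# above no longer depends on how far the attribute moves. A sketch of that
# simpler mapping (illustrative helper, not hatasmota's actual code):

```python
def expected_fixed_fade_speed(transition):
    """Approximate Tasmota fade speed when fading at fixed duration."""
    if transition == 0:
        return 0  # Fade2 0: fading disabled
    return min(40, max(1, 2 * transition))
```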


async def test_relay_as_light(hass, mqtt_mock, setup_tasmota):
    """Test that a relay in light mode shows up as a light."""
config = copy.deepcopy(DEFAULT_CONFIG)
config["rl"][0] = 1
config["so"]["30"] = 1 # Enforce Home Assistant auto-discovery as light
mac = config["mac"]
async_fire_mqtt_message(
hass,
f"{DEFAULT_PREFIX}/{mac}/config",
json.dumps(config),
)
await hass.async_block_till_done()
state = hass.states.get("switch.test")
assert state is None
state = hass.states.get("light.test")
assert state is not None


async def _test_split_light(hass, mqtt_mock, config, num_lights, num_switches):
"""Test multi-channel light split to single-channel dimmers."""
mac = config["mac"]
async_fire_mqtt_message(
hass,
f"{DEFAULT_PREFIX}/{mac}/config",
json.dumps(config),
)
await hass.async_block_till_done()
async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
await hass.async_block_till_done()
await hass.async_block_till_done()
assert len(hass.states.async_entity_ids("switch")) == num_switches
assert len(hass.states.async_entity_ids("light")) == num_lights
lights = hass.states.async_entity_ids("light")
for idx, entity in enumerate(lights):
mqtt_mock.async_publish.reset_mock()
# Turn the light on and verify MQTT message is sent
await common.async_turn_on(hass, entity)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
f"NoDelay;Power{idx+num_switches+1} ON",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
# Dim the light and verify MQTT message is sent
await common.async_turn_on(hass, entity, brightness=(idx + 1) * 25.5)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
f"NoDelay;Channel{idx+num_switches+1} {(idx+1)*10}",
0,
False,
)
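# The split-light helper above assumes Power/Channel indices start after any
# plain relays, and that Channel takes a 0-100 percentage. A sketch of the
# payloads it expects (the helper name is illustrative, not part of hatasmota):

```python
def split_light_payloads(idx, num_switches, brightness):
    """Build the Backlog payloads expected for split light number idx."""
    # Channel/Power numbering is 1-based and starts after the plain relays.
    relay = idx + num_switches + 1
    return (
        f"NoDelay;Power{relay} ON",
        f"NoDelay;Channel{relay} {round(brightness * 100 / 255)}",
    )
```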


async def test_split_light(hass, mqtt_mock, setup_tasmota):
"""Test multi-channel light split to single-channel dimmers."""
config = copy.deepcopy(DEFAULT_CONFIG)
config["rl"][0] = 2
config["rl"][1] = 2
config["rl"][2] = 2
config["rl"][3] = 2
config["rl"][4] = 2
    config["so"]["68"] = 1  # Multi-channel PWM instead of a single light
config["lt_st"] = 5 # 5 channel light (RGBCW)
await _test_split_light(hass, mqtt_mock, config, 5, 0)


async def test_split_light2(hass, mqtt_mock, setup_tasmota):
"""Test multi-channel light split to single-channel dimmers."""
config = copy.deepcopy(DEFAULT_CONFIG)
config["rl"][0] = 1
config["rl"][1] = 1
config["rl"][2] = 2
config["rl"][3] = 2
config["rl"][4] = 2
config["rl"][5] = 2
config["rl"][6] = 2
    config["so"]["68"] = 1  # Multi-channel PWM instead of a single light
config["lt_st"] = 5 # 5 channel light (RGBCW)
await _test_split_light(hass, mqtt_mock, config, 5, 2)


async def _test_unlinked_light(hass, mqtt_mock, config, num_switches):
"""Test rgbww light split to rgb+ww."""
mac = config["mac"]
num_lights = 2
async_fire_mqtt_message(
hass,
f"{DEFAULT_PREFIX}/{mac}/config",
json.dumps(config),
)
await hass.async_block_till_done()
async_fire_mqtt_message(hass, "tasmota_49A3BC/tele/LWT", "Online")
await hass.async_block_till_done()
await hass.async_block_till_done()
assert len(hass.states.async_entity_ids("switch")) == num_switches
assert len(hass.states.async_entity_ids("light")) == num_lights
lights = hass.states.async_entity_ids("light")
for idx, entity in enumerate(lights):
mqtt_mock.async_publish.reset_mock()
# Turn the light on and verify MQTT message is sent
await common.async_turn_on(hass, entity)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
f"NoDelay;Power{idx+num_switches+1} ON",
0,
False,
)
mqtt_mock.async_publish.reset_mock()
# Dim the light and verify MQTT message is sent
await common.async_turn_on(hass, entity, brightness=(idx + 1) * 25.5)
mqtt_mock.async_publish.assert_called_once_with(
"tasmota_49A3BC/cmnd/Backlog",
f"NoDelay;Dimmer{idx+1} {(idx+1)*10}",
0,
False,
)


async def test_unlinked_light(hass, mqtt_mock, setup_tasmota):
"""Test rgbww light split to rgb+ww."""
config = copy.deepcopy(DEFAULT_CONFIG)
config["rl"][0] = 2
config["rl"][1] = 2
config["lk"] = 0 # RGB + white channels unlinked
config["lt_st"] = 5 # 5 channel light (RGBCW)
await _test_unlinked_light(hass, mqtt_mock, config, 0)


async def test_unlinked_light2(hass, mqtt_mock, setup_tasmota):
"""Test rgbww light split to rgb+ww."""
config = copy.deepcopy(DEFAULT_CONFIG)
config["rl"][0] = 1
config["rl"][1] = 1
config["rl"][2] = 2
config["rl"][3] = 2
config["lk"] = 0 # RGB + white channels unlinked
config["lt_st"] = 5 # 5 channel light (RGBCW)
await _test_unlinked_light(hass, mqtt_mock, config, 2)


async def test_discovery_update_reconfigure_light(
hass, mqtt_mock, caplog, setup_tasmota
):
"""Test reconfigure of discovered light."""
config = copy.deepcopy(DEFAULT_CONFIG)
config["rl"][0] = 2
config["lt_st"] = 1 # 1 channel light (Dimmer)
config2 = copy.deepcopy(DEFAULT_CONFIG)
config2["rl"][0] = 2
config2["lt_st"] = 3 # 3 channel light (RGB)
data1 = json.dumps(config)
data2 = json.dumps(config2)
# Simple dimmer
async_fire_mqtt_message(hass, f"{DEFAULT_PREFIX}/{config[CONF_MAC]}/config", data1)
await hass.async_block_till_done()
state = hass.states.get("light.test")
assert state.attributes.get("supported_features") == SUPPORT_TRANSITION
assert state.attributes.get("supported_color_modes") == ["brightness"]
# Reconfigure as RGB light
async_fire_mqtt_message(hass, f"{DEFAULT_PREFIX}/{config[CONF_MAC]}/config", data2)
await hass.async_block_till_done()
state = hass.states.get("light.test")
assert (
state.attributes.get("supported_features")
== SUPPORT_EFFECT | SUPPORT_TRANSITION
)
assert state.attributes.get("supported_color_modes") == ["rgb"]
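The reconfigure test above implies a mapping from the Tasmota `lt_st` channel count to the Home Assistant color mode. The sketch below is a reading of this test's expectations only, not Home Assistant's own implementation:

```python
# Mapping implied by the assertions in test_discovery_update_reconfigure_light
# (illustrative; derived from the test expectations, not HA source).
LT_ST_TO_COLOR_MODE = {
    1: "brightness",  # 1 channel light (Dimmer)
    3: "rgb",         # 3 channel light (RGB)
}


def expected_color_modes(lt_st: int) -> list:
    # The tests assert a single-element supported_color_modes list.
    return [LT_ST_TO_COLOR_MODE[lt_st]]
```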
async def test_availability_when_connection_lost(
    hass, mqtt_client_mock, mqtt_mock, setup_tasmota
):
    """Test availability after MQTT disconnection."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 2
    config["lt_st"] = 1  # 1 channel light (Dimmer)
    await help_test_availability_when_connection_lost(
        hass, mqtt_client_mock, mqtt_mock, light.DOMAIN, config
    )


async def test_availability(hass, mqtt_mock, setup_tasmota):
    """Test availability."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 2
    config["lt_st"] = 1  # 1 channel light (Dimmer)
    await help_test_availability(hass, mqtt_mock, light.DOMAIN, config)


async def test_availability_discovery_update(hass, mqtt_mock, setup_tasmota):
    """Test availability discovery update."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 2
    config["lt_st"] = 1  # 1 channel light (Dimmer)
    await help_test_availability_discovery_update(hass, mqtt_mock, light.DOMAIN, config)


async def test_availability_poll_state(
    hass, mqtt_client_mock, mqtt_mock, setup_tasmota
):
    """Test polling after MQTT connection (re)established."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 2
    config["lt_st"] = 1  # 1 channel light (Dimmer)
    poll_topic = "tasmota_49A3BC/cmnd/STATE"
    await help_test_availability_poll_state(
        hass, mqtt_client_mock, mqtt_mock, light.DOMAIN, config, poll_topic, ""
    )
async def test_discovery_removal_light(hass, mqtt_mock, caplog, setup_tasmota):
    """Test removal of discovered light."""
    config1 = copy.deepcopy(DEFAULT_CONFIG)
    config1["rl"][0] = 2
    config1["lt_st"] = 1  # 1 channel light (Dimmer)
    config2 = copy.deepcopy(DEFAULT_CONFIG)
    config2["rl"][0] = 0
    config2["lt_st"] = 0
    await help_test_discovery_removal(
        hass, mqtt_mock, caplog, light.DOMAIN, config1, config2
    )


async def test_discovery_removal_relay_as_light(hass, mqtt_mock, caplog, setup_tasmota):
    """Test removal of discovered relay as light."""
    config1 = copy.deepcopy(DEFAULT_CONFIG)
    config1["rl"][0] = 1
    config1["so"]["30"] = 1  # Enforce Home Assistant auto-discovery as light
    config2 = copy.deepcopy(DEFAULT_CONFIG)
    config2["rl"][0] = 1
    config2["so"]["30"] = 0  # Disable Home Assistant auto-discovery as light
    await help_test_discovery_removal(
        hass, mqtt_mock, caplog, light.DOMAIN, config1, config2
    )


async def test_discovery_removal_relay_as_light2(
    hass, mqtt_mock, caplog, setup_tasmota
):
    """Test removal of discovered relay as light."""
    config1 = copy.deepcopy(DEFAULT_CONFIG)
    config1["rl"][0] = 1
    config1["so"]["30"] = 1  # Enforce Home Assistant auto-discovery as light
    config2 = copy.deepcopy(DEFAULT_CONFIG)
    config2["rl"][0] = 0
    config2["so"]["30"] = 0  # Disable Home Assistant auto-discovery as light
    await help_test_discovery_removal(
        hass, mqtt_mock, caplog, light.DOMAIN, config1, config2
    )


async def test_discovery_update_unchanged_light(hass, mqtt_mock, caplog, setup_tasmota):
    """Test update of discovered light."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 2
    config["lt_st"] = 1  # 1 channel light (Dimmer)
    with patch(
        "homeassistant.components.tasmota.light.TasmotaLight.discovery_update"
    ) as discovery_update:
        await help_test_discovery_update_unchanged(
            hass, mqtt_mock, caplog, light.DOMAIN, config, discovery_update
        )


async def test_discovery_device_remove(hass, mqtt_mock, setup_tasmota):
    """Test device registry remove."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 2
    config["lt_st"] = 1  # 1 channel light (Dimmer)
    unique_id = f"{DEFAULT_CONFIG['mac']}_light_light_0"
    await help_test_discovery_device_remove(
        hass, mqtt_mock, light.DOMAIN, unique_id, config
    )


async def test_discovery_device_remove_relay_as_light(hass, mqtt_mock, setup_tasmota):
    """Test device registry remove."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 1
    config["so"]["30"] = 1  # Enforce Home Assistant auto-discovery as light
    unique_id = f"{DEFAULT_CONFIG['mac']}_light_relay_0"
    await help_test_discovery_device_remove(
        hass, mqtt_mock, light.DOMAIN, unique_id, config
    )
async def test_entity_id_update_subscriptions(hass, mqtt_mock, setup_tasmota):
    """Test MQTT subscriptions are managed when entity_id is updated."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 2
    config["lt_st"] = 1  # 1 channel light (Dimmer)
    topics = [
        get_topic_stat_result(config),
        get_topic_tele_state(config),
        get_topic_tele_will(config),
    ]
    await help_test_entity_id_update_subscriptions(
        hass, mqtt_mock, light.DOMAIN, config, topics
    )


async def test_entity_id_update_discovery_update(hass, mqtt_mock, setup_tasmota):
    """Test MQTT discovery update when entity_id is updated."""
    config = copy.deepcopy(DEFAULT_CONFIG)
    config["rl"][0] = 2
    config["lt_st"] = 1  # 1 channel light (Dimmer)
    await help_test_entity_id_update_discovery_update(
        hass, mqtt_mock, light.DOMAIN, config
    )


# ---- tests/test_accounts.py (repo: evanblank3/ape-ledger, license: Apache-2.0) ----
import json
from pathlib import Path
from typing import Optional

import pytest
from ape import networks
from ape.api import TransactionAPI
from ape.api.networks import LOCAL_NETWORK_NAME
from ape_ethereum.ecosystem import DynamicFeeTransaction, StaticFeeTransaction
from eip712.messages import EIP712Message, EIP712Type
from eth_account.messages import SignableMessage

from ape_ledger.accounts import AccountContainer, LedgerAccount
from ape_ledger.exceptions import LedgerSigningError

from .conftest import TEST_ADDRESS, TEST_ALIAS, TEST_HD_PATH, assert_account


class Person(EIP712Type):
    name: "string"  # type: ignore # noqa: F821
    wallet: "address"  # type: ignore # noqa: F821


class Mail(EIP712Message):
    _chainId_: "uint256" = 1  # type: ignore # noqa: F821
    _name_: "string" = "Ether Mail"  # type: ignore # noqa: F821
    _verifyingContract_: "address" = "0xCcCCccccCCCCcCCCCCCcCcCccCcCCCcCcccccccC"  # type: ignore # noqa: F821 E501
    _version_: "string" = "1"  # type: ignore # noqa: F821

    sender: Person
    receiver: Person


# noinspection PyArgumentList
TEST_SENDER = Person("Cow", "0xCD2a3d9F938E13CD947Ec05AbC7FE734Df8DD826")  # type: ignore
# noinspection PyArgumentList
TEST_RECEIVER = Person("Bob", "0xB0B0b0b0b0b0B000000000000000000000000000")  # type: ignore
# noinspection PyArgumentList
TEST_TYPED_MESSAGE = Mail(sender=TEST_SENDER, receiver=TEST_RECEIVER)  # type: ignore
TEST_TXN_DATA = b"""`\x80`@R4\x80\x15a\x00\x10W`\x00\x80\xfd[P`\x00\x80T`\x01`\x01`\xa0\x1b\x03\x19\x163\x17\x90U`\x03\x80T`\xff\x19\x16`\x01\x17\x90Ua\x04\xa8\x80a\x00?`\x009`\x00\xf3\xfe`\x80`@R`\x046\x10a\x00pW`\x005`\xe0\x1c\x80c>G\xd6\xf3\x11a\x00NW\x80c>G\xd6\xf3\x14a\x00\xd4W\x80c\x8d\xa5\xcb[\x14a\x01\x19W\x80c\xb6\rB\x88\x14a\x01JW\x80c\xdc\r=\xff\x14a\x01RWa\x00pV[\x80c\x12)\xdc\x9e\x14a\x00uW\x80c#\x8d\xaf\xe0\x14a\x00\xa3W\x80c<\xcf\xd6\x0b\x14a\x00\xccW[`\x00\x80\xfd[4\x80\x15a\x00\x81W`\x00\x80\xfd[Pa\x00\xa1`\x04\x806\x03` \x81\x10\x15a\x00\x98W`\x00\x80\xfd[P5\x15\x15a\x01|V[\x00[4\x80\x15a\x00\xafW`\x00\x80\xfd[Pa\x00\xb8a\x01\xdcV[`@\x80Q\x91\x15\x15\x82RQ\x90\x81\x90\x03` \x01\x90\xf3[a\x00\xa1a\x01\xe5V[4\x80\x15a\x00\xe0W`\x00\x80\xfd[Pa\x01\x07`\x04\x806\x03` \x81\x10\x15a\x00\xf7W`\x00\x80\xfd[P5`\x01`\x01`\xa0\x1b\x03\x16a\x02\xddV[`@\x80Q\x91\x82RQ\x90\x81\x90\x03` \x01\x90\xf3[4\x80\x15a\x01%W`\x00\x80\xfd[Pa\x01.a\x02\xefV[`@\x80Q`\x01`\x01`\xa0\x1b\x03\x90\x92\x16\x82RQ\x90\x81\x90\x03` \x01\x90\xf3[a\x00\xa1a\x02\xfeV[4\x80\x15a\x01^W`\x00\x80\xfd[Pa\x01.`\x04\x806\x03` \x81\x10\x15a\x01uW`\x00\x80\xfd[P5a\x03\xa4V[`\x00T`\x01`\x01`\xa0\x1b\x03\x163\x14a\x01\xc9W`@\x80QbF\x1b\xcd`\xe5\x1b\x81R` `\x04\x82\x01R`\x0b`$\x82\x01Rj\x08X]]\x1a\x1b\xdc\x9a^\x99Y`\xaa\x1b`D\x82\x01R\x90Q\x90\x81\x90\x03`d\x01\x90\xfd[`\x03\x80T`\xff\x19\x16\x91\x15\x15\x91\x90\x91\x17\x90UV[`\x03T`\xff\x16\x81V[`\x00T`\x01`\x01`\xa0\x1b\x03\x163\x14a\x022W`@\x80QbF\x1b\xcd`\xe5\x1b\x81R` `\x04\x82\x01R`\x0b`$\x82\x01Rj\x08X]]\x1a\x1b\xdc\x9a^\x99Y`\xaa\x1b`D\x82\x01R\x90Q\x90\x81\x90\x03`d\x01\x90\xfd[`\x03T`\xff\x16a\x02AW`\x00\x80\xfd[`@Q3\x90G\x80\x15a\x08\xfc\x02\x91`\x00\x81\x81\x81\x85\x88\x88\xf1\x93PPPP\x15\x80\x15a\x02mW=`\x00\x80>=`\x00\xfd[P`\x00[`\x02T\x81\x10\x15a\x02\xbcW`\x00`\x02\x82\x81T\x81\x10a\x02\x8aW\xfe[`\x00\x91\x82R` \x80\x83 \x90\x91\x01T`\x01`\x01`\xa0\x1b\x03\x16\x82R`\x01\x90\x81\x90R`@\x82 
\x91\x90\x91U\x91\x90\x91\x01\x90Pa\x02qV[P`@\x80Q`\x00\x81R` \x81\x01\x91\x82\x90RQa\x02\xda\x91`\x02\x91a\x03\xcbV[PV[`\x01` R`\x00\x90\x81R`@\x90 T\x81V[`\x00T`\x01`\x01`\xa0\x1b\x03\x16\x81V[`\x03T`\xff\x16a\x03\rW`\x00\x80\xfd[`\x004\x11a\x03LW`@QbF\x1b\xcd`\xe5\x1b\x81R`\x04\x01\x80\x80` \x01\x82\x81\x03\x82R`#\x81R` \x01\x80a\x04P`#\x919`@\x01\x91PP`@Q\x80\x91\x03\x90\xfd[3`\x00\x81\x81R`\x01` \x81\x90R`@\x82 \x80T4\x01\x90U`\x02\x80T\x91\x82\x01\x81U\x90\x91R\x7f@W\x87\xfa\x12\xa8#\xe0\xf2\xb7c\x1c\xc4\x1b;\xa8\x82\x8b3!\xca\x81\x11\x11\xfau\xcd:\xa3\xbbZ\xce\x01\x80T`\x01`\x01`\xa0\x1b\x03\x19\x16\x90\x91\x17\x90UV[`\x02\x81\x81T\x81\x10a\x03\xb1W\xfe[`\x00\x91\x82R` \x90\x91 \x01T`\x01`\x01`\xa0\x1b\x03\x16\x90P\x81V[\x82\x80T\x82\x82U\x90`\x00R` `\x00 \x90\x81\x01\x92\x82\x15a\x04 W\x91` \x02\x82\x01[\x82\x81\x11\x15a\x04 W\x82Q\x82T`\x01`\x01`\xa0\x1b\x03\x19\x16`\x01`\x01`\xa0\x1b\x03\x90\x91\x16\x17\x82U` \x90\x92\x01\x91`\x01\x90\x91\x01\x90a\x03\xebV[Pa\x04,\x92\x91Pa\x040V[P\x90V[[\x80\x82\x11\x15a\x04,W\x80T`\x01`\x01`\xa0\x1b\x03\x19\x16\x81U`\x01\x01a\x041V\xfeFund amount must be greater than 0.\xa2dipfsX"\x12 \\.\xe1\xb9\xbd\xde\x0b.`io)\xf2\xb6\xf1\xd5\xce\x1d\xd7_\x8b\xfd\xf6\xfbT\x14#\x1a\x12\xcf\xa7\xffdsolcC\x00\x06\x0c\x003""" # noqa: E501
def _build_transaction(txn: TransactionAPI, receiver: Optional[str] = None) -> TransactionAPI:
    txn.chain_id = 579875
    txn.nonce = 0
    txn.gas_limit = 2
    txn.value = 10000000000
    txn.data = TEST_TXN_DATA
    if receiver:
        txn.receiver = receiver

    # Values not part of RLP
    txn.sender = TEST_SENDER.wallet
    return txn


def _create_static_fee_txn(receiver: Optional[str] = None) -> StaticFeeTransaction:
    txn = StaticFeeTransaction()
    txn = _build_transaction(txn, receiver=receiver)  # type: ignore
    txn.gas_price = 1
    return txn  # type: ignore


def _create_dynamic_fee_txn(receiver: Optional[str] = None) -> DynamicFeeTransaction:
    txn = DynamicFeeTransaction()
    txn = _build_transaction(txn, receiver=receiver)  # type: ignore
    txn.max_fee = 300000000
    txn.max_priority_fee = 10000000
    return txn  # type: ignore


TEST_STATIC_FEE_TXN = _create_static_fee_txn()
TEST_STATIC_FEE_TXN_WITH_RECEIVER = _create_static_fee_txn(receiver=TEST_RECEIVER.wallet)
TEST_DYNAMIC_FEE_TXN = _create_dynamic_fee_txn()
TEST_DYNAMIC_FEE_TXN_WITH_RECEIVER = _create_dynamic_fee_txn(receiver=TEST_RECEIVER.wallet)


def create_account(account_path, hd_path):
    with open(account_path, "w") as account_file:
        account_data = {"address": TEST_ADDRESS, "hdpath": hd_path}
        account_file.writelines(json.dumps(account_data))
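`create_account` writes the plugin's minimal on-disk keyfile. The reader below is a hedged round-trip sketch (the function name is illustrative, not the plugin's API): the keyfile holds only the account address and HD derivation path as plain JSON strings.

```python
# Hypothetical reader mirroring create_account above (illustrative only).
import json
from pathlib import Path


def read_account(account_path):
    # The keyfile is a single JSON object with "address" and "hdpath" keys.
    data = json.loads(Path(account_path).read_text())
    return data["address"], data["hdpath"]
```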
@pytest.fixture
def account_connection(mocker, ledger_account):
    patch = mocker.patch("ape_ledger.accounts.connect_to_ethereum_account")
    patch.return_value = ledger_account
    return patch


@pytest.fixture(autouse=True)
def isolated_file_system(runner):
    with runner.isolated_filesystem():
        yield


@pytest.fixture
def account(mock_container):
    create_account("account.json", TEST_HD_PATH)
    with networks.parse_network_choice(f"ethereum:{LOCAL_NETWORK_NAME}:test"):
        yield LedgerAccount(container=mock_container, account_file_path=Path("account.json"))


@pytest.fixture
def sign_txn_spy(mocker):
    spy = mocker.spy(LedgerAccount, "_client")
    spy.sign_transaction.return_value = (0, b"r", b"s")
    return spy
class TestAccountContainer:
    def test_save_account(self, mock_container):
        container = AccountContainer(data_folder=Path("."), account_type=LedgerAccount)
        container.save_account(TEST_ALIAS, TEST_ADDRESS, TEST_HD_PATH)
        assert_account(f"{TEST_ALIAS}.json", expected_hdpath=TEST_HD_PATH)


class TestLedgerAccount:
    def test_address_returns_address_from_file(self, account):
        assert account.address.lower() == TEST_ADDRESS.lower()

    def test_hdpath_returns_address_from_file(self, account):
        assert account.hdpath.path == TEST_HD_PATH

    def test_sign_message_personal(self, mocker, account, account_connection):
        spy = mocker.spy(LedgerAccount, "_client")
        spy.sign_personal_message.return_value = (0, b"r", b"s")

        message = SignableMessage(
            version=b"E", header=b"thereum Signed Message:\n6", body=b"I\xe2\x99\xa5SF"
        )
        actual_v, actual_r, actual_s = account.sign_message(message)

        assert actual_v == 1
        assert actual_r == b"r"
        assert actual_s == b"s"
        spy.sign_personal_message.assert_called_once_with(message.body)

    def test_sign_message_typed(self, mocker, account, account_connection):
        spy = mocker.spy(LedgerAccount, "_client")
        spy.sign_typed_data.return_value = (0, b"r", b"s")

        message = TEST_TYPED_MESSAGE.signable_message
        actual_v, actual_r, actual_s = account.sign_message(message)

        assert actual_v == 1
        assert actual_r == b"r"
        assert actual_s == b"s"
        spy.sign_typed_data.assert_called_once_with(message.header, message.body)

    def test_sign_message_unsupported(self, account, account_connection):
        unsupported_version = b"X"
        message = SignableMessage(
            version=unsupported_version,
            header=b"thereum Signed Message:\n6",
            body=b"I\xe2\x99\xa5SF",
        )
        with pytest.raises(LedgerSigningError) as err:
            account.sign_message(message)

        actual = str(err.value)
        expected = f"Unsupported message-signing specification, (version={unsupported_version})."
        assert actual == expected

    @pytest.mark.parametrize(
        "txn,expected",
        (
            (
                TEST_STATIC_FEE_TXN,
"f904fa800102808502540be400b904e7608060405234801561001057600080fd5b50600080546001600160a01b031916331790556003805460ff191660011790556104a88061003f6000396000f3fe6080604052600436106100705760003560e01c80633e47d6f31161004e5780633e47d6f3146100d45780638da5cb5b14610119578063b60d42881461014a578063dc0d3dff1461015257610070565b80631229dc9e14610075578063238dafe0146100a35780633ccfd60b146100cc575b600080fd5b34801561008157600080fd5b506100a16004803603602081101561009857600080fd5b5035151561017c565b005b3480156100af57600080fd5b506100b86101dc565b604080519115158252519081900360200190f35b6100a16101e5565b3480156100e057600080fd5b50610107600480360360208110156100f757600080fd5b50356001600160a01b03166102dd565b60408051918252519081900360200190f35b34801561012557600080fd5b5061012e6102ef565b604080516001600160a01b039092168252519081900360200190f35b6100a16102fe565b34801561015e57600080fd5b5061012e6004803603602081101561017557600080fd5b50356103a4565b6000546001600160a01b031633146101c9576040805162461bcd60e51b815260206004820152600b60248201526a08585d5d1a1bdc9a5e995960aa1b604482015290519081900360640190fd5b6003805460ff1916911515919091179055565b60035460ff1681565b6000546001600160a01b03163314610232576040805162461bcd60e51b815260206004820152600b60248201526a08585d5d1a1bdc9a5e995960aa1b604482015290519081900360640190fd5b60035460ff1661024157600080fd5b60405133904780156108fc02916000818181858888f1935050505015801561026d573d6000803e3d6000fd5b5060005b6002548110156102bc5760006002828154811061028a57fe5b60009182526020808320909101546001600160a01b031682526001908190526040822091909155919091019050610271565b5060408051600081526020810191829052516102da916002916103cb565b50565b60016020526000908152604090205481565b6000546001600160a01b031681565b60035460ff1661030d57600080fd5b6000341161034c5760405162461bcd60e51b81526004018080602001828103825260238152602001806104506023913960400191505060405180910390fd5b33600081815260016020819052604082208054340190556002805491820181559091527f405787fa12a823e0f2b7631cc41b3ba8828b3321ca811111fa75cd3aa3bb5ace018054600160016
0a01b0319169091179055565b600281815481106103b157fe5b6000918252602090912001546001600160a01b0316905081565b828054828255906000526020600020908101928215610420579160200282015b8281111561042057825182546001600160a01b0319166001600160a01b039091161782556020909201916001909101906103eb565b5061042c929150610430565b5090565b5b8082111561042c5780546001600160a01b031916815560010161043156fe46756e6420616d6f756e74206d7573742062652067726561746572207468616e20302ea26469706673582212205c2ee1b9bdde0b2e60696f29f2b6f1d5ce1dd75f8bfdf6fb5414231a12cfa7ff64736f6c634300060c00338308d9238080", # noqa: E501
            ),
            (
                TEST_STATIC_FEE_TXN_WITH_RECEIVER,
"f9050e80010294b0b0b0b0b0b0b0000000000000000000000000008502540be400b904e7608060405234801561001057600080fd5b50600080546001600160a01b031916331790556003805460ff191660011790556104a88061003f6000396000f3fe6080604052600436106100705760003560e01c80633e47d6f31161004e5780633e47d6f3146100d45780638da5cb5b14610119578063b60d42881461014a578063dc0d3dff1461015257610070565b80631229dc9e14610075578063238dafe0146100a35780633ccfd60b146100cc575b600080fd5b34801561008157600080fd5b506100a16004803603602081101561009857600080fd5b5035151561017c565b005b3480156100af57600080fd5b506100b86101dc565b604080519115158252519081900360200190f35b6100a16101e5565b3480156100e057600080fd5b50610107600480360360208110156100f757600080fd5b50356001600160a01b03166102dd565b60408051918252519081900360200190f35b34801561012557600080fd5b5061012e6102ef565b604080516001600160a01b039092168252519081900360200190f35b6100a16102fe565b34801561015e57600080fd5b5061012e6004803603602081101561017557600080fd5b50356103a4565b6000546001600160a01b031633146101c9576040805162461bcd60e51b815260206004820152600b60248201526a08585d5d1a1bdc9a5e995960aa1b604482015290519081900360640190fd5b6003805460ff1916911515919091179055565b60035460ff1681565b6000546001600160a01b03163314610232576040805162461bcd60e51b815260206004820152600b60248201526a08585d5d1a1bdc9a5e995960aa1b604482015290519081900360640190fd5b60035460ff1661024157600080fd5b60405133904780156108fc02916000818181858888f1935050505015801561026d573d6000803e3d6000fd5b5060005b6002548110156102bc5760006002828154811061028a57fe5b60009182526020808320909101546001600160a01b031682526001908190526040822091909155919091019050610271565b5060408051600081526020810191829052516102da916002916103cb565b50565b60016020526000908152604090205481565b6000546001600160a01b031681565b60035460ff1661030d57600080fd5b6000341161034c5760405162461bcd60e51b81526004018080602001828103825260238152602001806104506023913960400191505060405180910390fd5b33600081815260016020819052604082208054340190556002805491820181559091527f405787fa12a823e0f2b7631cc41b3ba8828b332
1ca811111fa75cd3aa3bb5ace0180546001600160a01b0319169091179055565b600281815481106103b157fe5b6000918252602090912001546001600160a01b0316905081565b828054828255906000526020600020908101928215610420579160200282015b8281111561042057825182546001600160a01b0319166001600160a01b039091161782556020909201916001909101906103eb565b5061042c929150610430565b5090565b5b8082111561042c5780546001600160a01b031916815560010161043156fe46756e6420616d6f756e74206d7573742062652067726561746572207468616e20302ea26469706673582212205c2ee1b9bdde0b2e60696f29f2b6f1d5ce1dd75f8bfdf6fb5414231a12cfa7ff64736f6c634300060c00338308d9238080", # noqa: E501
            ),
            (
                TEST_DYNAMIC_FEE_TXN,
"02f905018308d92380839896808411e1a30002808502540be400b904e7608060405234801561001057600080fd5b50600080546001600160a01b031916331790556003805460ff191660011790556104a88061003f6000396000f3fe6080604052600436106100705760003560e01c80633e47d6f31161004e5780633e47d6f3146100d45780638da5cb5b14610119578063b60d42881461014a578063dc0d3dff1461015257610070565b80631229dc9e14610075578063238dafe0146100a35780633ccfd60b146100cc575b600080fd5b34801561008157600080fd5b506100a16004803603602081101561009857600080fd5b5035151561017c565b005b3480156100af57600080fd5b506100b86101dc565b604080519115158252519081900360200190f35b6100a16101e5565b3480156100e057600080fd5b50610107600480360360208110156100f757600080fd5b50356001600160a01b03166102dd565b60408051918252519081900360200190f35b34801561012557600080fd5b5061012e6102ef565b604080516001600160a01b039092168252519081900360200190f35b6100a16102fe565b34801561015e57600080fd5b5061012e6004803603602081101561017557600080fd5b50356103a4565b6000546001600160a01b031633146101c9576040805162461bcd60e51b815260206004820152600b60248201526a08585d5d1a1bdc9a5e995960aa1b604482015290519081900360640190fd5b6003805460ff1916911515919091179055565b60035460ff1681565b6000546001600160a01b03163314610232576040805162461bcd60e51b815260206004820152600b60248201526a08585d5d1a1bdc9a5e995960aa1b604482015290519081900360640190fd5b60035460ff1661024157600080fd5b60405133904780156108fc02916000818181858888f1935050505015801561026d573d6000803e3d6000fd5b5060005b6002548110156102bc5760006002828154811061028a57fe5b60009182526020808320909101546001600160a01b031682526001908190526040822091909155919091019050610271565b5060408051600081526020810191829052516102da916002916103cb565b50565b60016020526000908152604090205481565b6000546001600160a01b031681565b60035460ff1661030d57600080fd5b6000341161034c5760405162461bcd60e51b81526004018080602001828103825260238152602001806104506023913960400191505060405180910390fd5b33600081815260016020819052604082208054340190556002805491820181559091527f405787fa12a823e0f2b7631cc41b3ba8828b3321ca811111fa75c
d3aa3bb5ace0180546001600160a01b0319169091179055565b600281815481106103b157fe5b6000918252602090912001546001600160a01b0316905081565b828054828255906000526020600020908101928215610420579160200282015b8281111561042057825182546001600160a01b0319166001600160a01b039091161782556020909201916001909101906103eb565b5061042c929150610430565b5090565b5b8082111561042c5780546001600160a01b031916815560010161043156fe46756e6420616d6f756e74206d7573742062652067726561746572207468616e20302ea26469706673582212205c2ee1b9bdde0b2e60696f29f2b6f1d5ce1dd75f8bfdf6fb5414231a12cfa7ff64736f6c634300060c0033c0", # noqa: E501
            ),
            (
                TEST_DYNAMIC_FEE_TXN_WITH_RECEIVER,
"02f905158308d92380839896808411e1a3000294b0b0b0b0b0b0b0000000000000000000000000008502540be400b904e7608060405234801561001057600080fd5b50600080546001600160a01b031916331790556003805460ff191660011790556104a88061003f6000396000f3fe6080604052600436106100705760003560e01c80633e47d6f31161004e5780633e47d6f3146100d45780638da5cb5b14610119578063b60d42881461014a578063dc0d3dff1461015257610070565b80631229dc9e14610075578063238dafe0146100a35780633ccfd60b146100cc575b600080fd5b34801561008157600080fd5b506100a16004803603602081101561009857600080fd5b5035151561017c565b005b3480156100af57600080fd5b506100b86101dc565b604080519115158252519081900360200190f35b6100a16101e5565b3480156100e057600080fd5b50610107600480360360208110156100f757600080fd5b50356001600160a01b03166102dd565b60408051918252519081900360200190f35b34801561012557600080fd5b5061012e6102ef565b604080516001600160a01b039092168252519081900360200190f35b6100a16102fe565b34801561015e57600080fd5b5061012e6004803603602081101561017557600080fd5b50356103a4565b6000546001600160a01b031633146101c9576040805162461bcd60e51b815260206004820152600b60248201526a08585d5d1a1bdc9a5e995960aa1b604482015290519081900360640190fd5b6003805460ff1916911515919091179055565b60035460ff1681565b6000546001600160a01b03163314610232576040805162461bcd60e51b815260206004820152600b60248201526a08585d5d1a1bdc9a5e995960aa1b604482015290519081900360640190fd5b60035460ff1661024157600080fd5b60405133904780156108fc02916000818181858888f1935050505015801561026d573d6000803e3d6000fd5b5060005b6002548110156102bc5760006002828154811061028a57fe5b60009182526020808320909101546001600160a01b031682526001908190526040822091909155919091019050610271565b5060408051600081526020810191829052516102da916002916103cb565b50565b60016020526000908152604090205481565b6000546001600160a01b031681565b60035460ff1661030d57600080fd5b6000341161034c5760405162461bcd60e51b81526004018080602001828103825260238152602001806104506023913960400191505060405180910390fd5b33600081815260016020819052604082208054340190556002805491820181559091527f405787fa12a82
3e0f2b7631cc41b3ba8828b3321ca811111fa75cd3aa3bb5ace0180546001600160a01b0319169091179055565b600281815481106103b157fe5b6000918252602090912001546001600160a01b0316905081565b828054828255906000526020600020908101928215610420579160200282015b8281111561042057825182546001600160a01b0319166001600160a01b039091161782556020909201916001909101906103eb565b5061042c929150610430565b5090565b5b8082111561042c5780546001600160a01b031916815560010161043156fe46756e6420616d6f756e74206d7573742062652067726561746572207468616e20302ea26469706673582212205c2ee1b9bdde0b2e60696f29f2b6f1d5ce1dd75f8bfdf6fb5414231a12cfa7ff64736f6c634300060c0033c0", # noqa: E501
            ),
        ),
    )
    def test_sign_transaction(self, txn, expected, sign_txn_spy, account, account_connection):
        account.sign_transaction(txn)

        actual = sign_txn_spy.sign_transaction.call_args[0][0].hex()
        assert actual == expected


# ---- sportsipy/decorators.py (repo: MArtinherz/sportsipy, license: MIT) ----
from functools import wraps
def int_property_decorator(func):
    @property
    @wraps(func)
    def wrapper(*args):
        value = func(*args)
        try:
            return int(value)
        except (TypeError, ValueError):
            # If there is no value, default to None. None is statistically
            # different from 0 as a player/team who played an entire game and
            # contributed nothing is different from one who didn't play at all.
            # This enables flexibility for end-users to decide whether they
            # want to fill the empty value with any specific number (such as 0
            # or an average/median for the category) or keep it empty depending
            # on their use-case.
            return None
    return wrapper
def float_property_decorator(func):
    @property
    @wraps(func)
    def wrapper(*args):
        value = func(*args)
        try:
            return float(value)
        except (TypeError, ValueError):
            # If there is no value, default to None. None is statistically
            # different from 0 as a player/team who played an entire game and
            # contributed nothing is different from one who didn't play at all.
            # This enables flexibility for end-users to decide whether they
            # want to fill the empty value with any specific number (such as 0
            # or an average/median for the category) or keep it empty depending
            # on their use-case.
            return None
    return wrapper


# ---- Cogs/PciUsb.py (repo: cheesycod/CorpBot.py, license: MIT) ----
import discord
from discord.ext import commands
from Cogs import DL
from Cogs import Message
def setup(bot):
# Add the bot
bot.add_cog(PciUsb(bot))
class PciUsb(commands.Cog):
# Init with the bot reference, and a reference to the settings var
def __init__(self, bot):
self.bot = bot
@commands.command(pass_context=True)
async def pci(self, ctx, ven_dev = None):
"""Searches pci-ids.ucw.cz for the passed PCI ven:dev id."""
if not ven_dev:
await ctx.send("Usage: `{}pci vvvv:dddd` where `vvvv` is the vendor id, and `dddd` is the device id.".format(ctx.prefix))
return
try:
v,i = ven_dev.split(":")
except:
await ctx.send("Usage: `{}pci vvvv:dddd` where `vvvv` is the vendor id, and `dddd` is the device id.".format(ctx.prefix))
return
if not (len(v)==len(i)==4):
await ctx.send("Usage: `{}pci vvvv:dddd` where `vvvv` is the vendor id, and `dddd` is the device id.".format(ctx.prefix))
return
if not v.isalnum() and not i.isalnum():
await ctx.send("Ven and dev ids must be alphanumeric.")
return
url = "http://pci-ids.ucw.cz/read/PC/{}".format(v)
try:
html = await DL.async_text(url)
except:
await ctx.send("No data returned.")
return
vendor = None
for line in html.split("\n"):
if '<div class="name">' in line:
try:
vendor = line.split("Name: ")[1].split("<")[0].replace("&","&").replace(""",'"').replace("'","'").replace(">",">").replace("<","<")
break
except:
pass
vendor = v if not vendor else vendor
url = "http://pci-ids.ucw.cz/read/PC/{}/{}".format(v,i)
try:
html = await DL.async_text(url)
except:
await ctx.send("No data returned.")
return
out = ""
for line in html.split("\n"):
if "itemname" in line.lower():
out += "Name: ".join(line.split("Name: ")[1:]).replace("&","&").replace(""",'"').replace("'","'").replace(">",">").replace("<","<")
out += "\n"
if not len(out):
await ctx.send("No name found.")
return
# Got data
await Message.EmbedText(description="`{}`\n\n{}".format(ven_dev,out),title="{} PCI Device Results".format(vendor),footer="Powered by http://pci-ids.ucw.cz",color=ctx.author).send(ctx)
@commands.command(pass_context=True)
async def usb(self, ctx, ven_dev = None):
"""Searches usb-ids.gowdy.us for the passed USB ven:dev id."""
if not ven_dev:
await ctx.send("Usage: `{}usb vvvv:dddd` where `vvvv` is the vendor id, and `dddd` is the device id.".format(ctx.prefix))
return
try:
v,i = ven_dev.split(":")
except:
await ctx.send("Usage: `{}usb vvvv:dddd` where `vvvv` is the vendor id, and `dddd` is the device id.".format(ctx.prefix))
return
if not (len(v)==len(i)==4):
await ctx.send("Usage: `{}usb vvvv:dddd` where `vvvv` is the vendor id, and `dddd` is the device id.".format(ctx.prefix))
return
if not (v.isalnum() and i.isalnum()): # both ids must be alphanumeric
await ctx.send("Ven and dev ids must be alphanumeric.")
return
url = "https://usb-ids.gowdy.us/read/UD/{}".format(v)
try:
html = await DL.async_text(url)
except:
await ctx.send("No data returned.")
return
vendor = None
for line in html.split("\n"):
if '<div class="name">' in line:
try:
vendor = line.split("Name: ")[1].split("<")[0].replace("&amp;","&").replace("&quot;",'"').replace("&#39;","'").replace("&gt;",">").replace("&lt;","<")
break
except:
pass
vendor = v if not vendor else vendor
url = "https://usb-ids.gowdy.us/read/UD/{}/{}".format(v,i)
try:
html = await DL.async_text(url)
except:
await ctx.send("No data returned.")
return
out = ""
for line in html.split("\n"):
if "itemname" in line.lower():
out += "Name: ".join(line.split("Name: ")[1:]).replace("&","&").replace(""",'"').replace("'","'").replace(">",">").replace("<","<")
out += "\n"
if not len(out):
await ctx.send("No name found.")
return
# Got data
await Message.EmbedText(description="`{}`\n\n{}".format(ven_dev,out),title="{} USB Device Results".format(vendor),footer="Powered by https://usb-ids.gowdy.us",color=ctx.author).send(ctx)
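The vendor- and device-name lines above decode HTML entities with a long chain of string replacements. The standard library's `html.unescape` handles the same entities (and all other named/numeric references) in one call; a minimal sketch of that alternative, using a hypothetical `clean_name` helper:

```python
import html

def clean_name(line):
    # Extract the text after "Name: " up to the next tag, then decode
    # entities such as &amp;, &quot;, &#39;, &lt; and &gt; in one call.
    raw = line.split("Name: ")[1].split("<")[0]
    return html.unescape(raw)

print(clean_name('<div class="name">Name: Foo &amp; Bar</div>'))  # prints: Foo & Bar
```

This avoids maintaining the replacement list by hand and also covers entities the chain misses.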
# --- ckanext-showcase/ckanext/showcase/tests/test_auth.py
# --- repo: smallmedia/iod-ckan (Apache-2.0)
import json
from nose import tools as nosetools
import ckan.plugins.toolkit as toolkit
try:
import ckan.tests.factories as factories
except ImportError: # for ckan <= 2.3
import ckan.new_tests.factories as factories
try:
import ckan.tests.helpers as helpers
except ImportError: # for ckan <= 2.3
import ckan.new_tests.helpers as helpers
from ckanext.showcase.tests import ShowcaseFunctionalTestBase
class TestShowcaseAuthIndex(ShowcaseFunctionalTestBase):
def test_auth_anon_user_can_view_showcase_index(self):
'''An anon (not logged in) user can view the Showcases index.'''
app = self._get_test_app()
app.get("/showcase", status=200)
def test_auth_logged_in_user_can_view_showcase_index(self):
'''
A logged in user can view the Showcase index.
'''
app = self._get_test_app()
user = factories.User()
app.get("/showcase", status=200,
extra_environ={'REMOTE_USER': str(user["name"])})
def test_auth_anon_user_cant_see_add_showcase_button(self):
'''
An anon (not logged in) user can't see the Add Showcase button on the
showcase index page.
'''
app = self._get_test_app()
response = app.get("/showcase", status=200)
# test for new showcase link in response
response.mustcontain(no="/showcase/new")
def test_auth_logged_in_user_cant_see_add_showcase_button(self):
'''
A logged in user can't see the Add Showcase button on the showcase
index page.
'''
app = self._get_test_app()
user = factories.User()
response = app.get("/showcase", status=200,
extra_environ={'REMOTE_USER': str(user['name'])})
# test for new showcase link in response
response.mustcontain(no="/showcase/new")
def test_auth_sysadmin_can_see_add_showcase_button(self):
'''
A sysadmin can see the Add Showcase button on the showcase index
page.
'''
app = self._get_test_app()
user = factories.Sysadmin()
response = app.get("/showcase", status=200,
extra_environ={'REMOTE_USER': str(user['name'])})
# test for new showcase link in response
response.mustcontain("/showcase/new")
class TestShowcaseAuthDetails(ShowcaseFunctionalTestBase):
def test_auth_anon_user_can_view_showcase_details(self):
'''
An anon (not logged in) user can view an individual Showcase details page.
'''
app = self._get_test_app()
factories.Dataset(type='showcase', name='my-showcase')
app.get('/showcase/my-showcase', status=200)
def test_auth_logged_in_user_can_view_showcase_details(self):
'''
A logged in user can view an individual Showcase details page.
'''
app = self._get_test_app()
user = factories.User()
factories.Dataset(type='showcase', name='my-showcase')
app.get('/showcase/my-showcase', status=200,
extra_environ={'REMOTE_USER': str(user['name'])})
def test_auth_anon_user_cant_see_manage_button(self):
'''
An anon (not logged in) user can't see the Manage button on an individual
showcase details page.
'''
app = self._get_test_app()
factories.Dataset(type='showcase', name='my-showcase')
response = app.get('/showcase/my-showcase', status=200)
# test for url to edit page
response.mustcontain(no="/showcase/edit/my-showcase")
def test_auth_logged_in_user_cant_see_manage_button(self):
'''
A logged in user can't see the Manage button on an individual showcase
details page.
'''
app = self._get_test_app()
user = factories.User()
factories.Dataset(type='showcase', name='my-showcase')
response = app.get('/showcase/my-showcase', status=200,
extra_environ={'REMOTE_USER': str(user['name'])})
# test for url to edit page
response.mustcontain(no="/showcase/edit/my-showcase")
def test_auth_sysadmin_can_see_manage_button(self):
'''
A sysadmin can see the Manage button on an individual showcase details
page.
'''
app = self._get_test_app()
user = factories.Sysadmin()
factories.Dataset(type='showcase', name='my-showcase')
response = app.get('/showcase/my-showcase', status=200,
extra_environ={'REMOTE_USER': str(user['name'])})
# test for url to edit page
response.mustcontain("/showcase/edit/my-showcase")
def test_auth_showcase_show_anon_can_access(self):
'''
Anon user can request showcase show.
'''
app = self._get_test_app()
factories.Dataset(type='showcase', name='my-showcase')
response = app.get('/api/3/action/ckanext_showcase_show?id=my-showcase',
status=200)
json_response = json.loads(response.body)
nosetools.assert_true(json_response['success'])
def test_auth_showcase_show_normal_user_can_access(self):
'''
Normal logged in user can request showcase show.
'''
user = factories.User()
app = self._get_test_app()
factories.Dataset(type='showcase', name='my-showcase')
response = app.get('/api/3/action/ckanext_showcase_show?id=my-showcase',
status=200, extra_environ={'REMOTE_USER': str(user['name'])})
json_response = json.loads(response.body)
nosetools.assert_true(json_response['success'])
def test_auth_showcase_show_sysadmin_can_access(self):
'''
A sysadmin can request showcase show.
'''
user = factories.Sysadmin()
app = self._get_test_app()
factories.Dataset(type='showcase', name='my-showcase')
response = app.get('/api/3/action/ckanext_showcase_show?id=my-showcase',
status=200, extra_environ={'REMOTE_USER': str(user['name'])})
json_response = json.loads(response.body)
nosetools.assert_true(json_response['success'])
class TestShowcaseAuthCreate(ShowcaseFunctionalTestBase):
def test_auth_anon_user_cant_view_create_showcase(self):
'''
An anon (not logged in) user can't access the create showcase page.
'''
app = self._get_test_app()
app.get("/showcase/new", status=302)
def test_auth_logged_in_user_cant_view_create_showcase_page(self):
'''
A logged in user can't access the create showcase page.
'''
app = self._get_test_app()
user = factories.User()
app.get("/showcase/new", status=401,
extra_environ={'REMOTE_USER': str(user['name'])})
def test_auth_sysadmin_can_view_create_showcase_page(self):
'''
A sysadmin can access the create showcase page.
'''
app = self._get_test_app()
user = factories.Sysadmin()
app.get("/showcase/new", status=200,
extra_environ={'REMOTE_USER': str(user['name'])})
class TestShowcaseAuthList(ShowcaseFunctionalTestBase):
def test_auth_showcase_list_anon_can_access(self):
'''
Anon user can request showcase list.
'''
app = self._get_test_app()
factories.Dataset(type='showcase', name='my-showcase')
response = app.get('/api/3/action/ckanext_showcase_list',
status=200)
json_response = json.loads(response.body)
nosetools.assert_true(json_response['success'])
def test_auth_showcase_list_normal_user_can_access(self):
'''
Normal logged in user can request showcase list.
'''
user = factories.User()
app = self._get_test_app()
factories.Dataset(type='showcase', name='my-showcase')
response = app.get('/api/3/action/ckanext_showcase_list',
status=200, extra_environ={'REMOTE_USER': str(user['name'])})
json_response = json.loads(response.body)
nosetools.assert_true(json_response['success'])
def test_auth_showcase_list_sysadmin_can_access(self):
'''
A sysadmin can request showcase list.
'''
user = factories.Sysadmin()
app = self._get_test_app()
factories.Dataset(type='showcase', name='my-showcase')
response = app.get('/api/3/action/ckanext_showcase_list',
status=200, extra_environ={'REMOTE_USER': str(user['name'])})
json_response = json.loads(response.body)
nosetools.assert_true(json_response['success'])
class TestShowcaseAuthEdit(ShowcaseFunctionalTestBase):
def test_auth_anon_user_cant_view_edit_showcase_page(self):
'''
An anon (not logged in) user can't access the showcase edit page.
'''
app = self._get_test_app()
factories.Dataset(type='showcase', name='my-showcase')
app.get('/showcase/edit/my-showcase', status=302)
def test_auth_logged_in_user_cant_view_edit_showcase_page(self):
'''
A logged in user can't access the showcase edit page.
'''
app = self._get_test_app()
user = factories.User()
factories.Dataset(type='showcase', name='my-showcase')
app.get('/showcase/edit/my-showcase', status=401,
extra_environ={'REMOTE_USER': str(user['name'])})
def test_auth_sysadmin_can_view_edit_showcase_page(self):
'''
A sysadmin can access the showcase edit page.
'''
app = self._get_test_app()
user = factories.Sysadmin()
factories.Dataset(type='showcase', name='my-showcase')
app.get('/showcase/edit/my-showcase', status=200,
extra_environ={'REMOTE_USER': str(user['name'])})
def test_auth_showcase_admin_can_view_edit_showcase_page(self):
'''
A showcase admin can access the showcase edit page.
'''
app = self._get_test_app()
user = factories.User()
# Make user a showcase admin
helpers.call_action('ckanext_showcase_admin_add', context={},
username=user['name'])
factories.Dataset(type='showcase', name='my-showcase')
app.get('/showcase/edit/my-showcase', status=200,
extra_environ={'REMOTE_USER': str(user['name'])})
def test_auth_anon_user_cant_view_manage_datasets(self):
'''
An anon (not logged in) user can't access the showcase manage datasets page.
'''
app = self._get_test_app()
factories.Dataset(type='showcase', name='my-showcase')
app.get('/showcase/manage_datasets/my-showcase', status=302)
def test_auth_logged_in_user_cant_view_manage_datasets(self):
'''
A logged in user (not sysadmin) can't access the showcase manage datasets page.
'''
app = self._get_test_app()
user = factories.User()
factories.Dataset(type='showcase', name='my-showcase')
app.get('/showcase/manage_datasets/my-showcase', status=401,
extra_environ={'REMOTE_USER': str(user['name'])})
def test_auth_sysadmin_can_view_manage_datasets(self):
'''
A sysadmin can access the showcase manage datasets page.
'''
app = self._get_test_app()
user = factories.Sysadmin()
factories.Dataset(type='showcase', name='my-showcase')
app.get('/showcase/manage_datasets/my-showcase', status=200,
extra_environ={'REMOTE_USER': str(user['name'])})
def test_auth_showcase_admin_can_view_manage_datasets(self):
'''
A showcase admin can access the showcase manage datasets page.
'''
app = self._get_test_app()
user = factories.User()
# Make user a showcase admin
helpers.call_action('ckanext_showcase_admin_add', context={},
username=user['name'])
factories.Dataset(type='showcase', name='my-showcase')
app.get('/showcase/manage_datasets/my-showcase', status=200,
extra_environ={'REMOTE_USER': str(user['name'])})
def test_auth_anon_user_cant_view_delete_showcase_page(self):
'''
An anon (not logged in) user can't access the showcase delete page.
'''
app = self._get_test_app()
factories.Dataset(type='showcase', name='my-showcase')
app.get('/showcase/delete/my-showcase', status=302)
def test_auth_logged_in_user_cant_view_delete_showcase_page(self):
'''
A logged in user can't access the showcase delete page.
'''
app = self._get_test_app()
user = factories.User()
factories.Dataset(type='showcase', name='my-showcase')
app.get('/showcase/delete/my-showcase', status=401,
extra_environ={'REMOTE_USER': str(user['name'])})
def test_auth_sysadmin_can_view_delete_showcase_page(self):
'''
A sysadmin can access the showcase delete page.
'''
app = self._get_test_app()
user = factories.Sysadmin()
factories.Dataset(type='showcase', name='my-showcase')
app.get('/showcase/delete/my-showcase', status=200,
extra_environ={'REMOTE_USER': str(user['name'])})
def test_auth_showcase_admin_can_view_delete_showcase_page(self):
'''
A showcase admin can access the showcase delete page.
'''
app = self._get_test_app()
user = factories.User()
# Make user a showcase admin
helpers.call_action('ckanext_showcase_admin_add', context={},
username=user['name'])
factories.Dataset(type='showcase', name='my-showcase')
app.get('/showcase/delete/my-showcase', status=200,
extra_environ={'REMOTE_USER': str(user['name'])})
def test_auth_anon_user_cant_view_addtoshowcase_dropdown_dataset_showcase_list(self):
'''
An anonymous user can't view the 'Add to showcase' dropdown selector
from a dataset's showcase list page.
'''
app = self._get_test_app()
factories.Dataset(name='my-showcase', type='showcase')
factories.Dataset(name='my-dataset')
showcase_list_response = app.get('/dataset/showcases/my-dataset', status=200)
nosetools.assert_false('showcase-add' in showcase_list_response.forms)
def test_auth_normal_user_cant_view_addtoshowcase_dropdown_dataset_showcase_list(self):
'''
A normal (logged in) user can't view the 'Add to showcase' dropdown
selector from a dataset's showcase list page.
'''
user = factories.User()
app = self._get_test_app()
factories.Dataset(name='my-showcase', type='showcase')
factories.Dataset(name='my-dataset')
showcase_list_response = app.get('/dataset/showcases/my-dataset', status=200,
extra_environ={'REMOTE_USER': str(user['name'])})
nosetools.assert_false('showcase-add' in showcase_list_response.forms)
def test_auth_sysadmin_can_view_addtoshowcase_dropdown_dataset_showcase_list(self):
'''
A sysadmin can view the 'Add to showcase' dropdown selector from a
dataset's showcase list page.
'''
user = factories.Sysadmin()
app = self._get_test_app()
factories.Dataset(name='my-showcase', type='showcase')
factories.Dataset(name='my-dataset')
showcase_list_response = app.get('/dataset/showcases/my-dataset', status=200,
extra_environ={'REMOTE_USER': str(user['name'])})
nosetools.assert_true('showcase-add' in showcase_list_response.forms)
def test_auth_showcase_admin_can_view_addtoshowcase_dropdown_dataset_showcase_list(self):
'''
A showcase admin can view the 'Add to showcase' dropdown selector from
a datasets showcase list page.
'''
app = self._get_test_app()
user = factories.User()
# Make user a showcase admin
helpers.call_action('ckanext_showcase_admin_add', context={},
username=user['name'])
factories.Dataset(name='my-showcase', type='showcase')
factories.Dataset(name='my-dataset')
showcase_list_response = app.get('/dataset/showcases/my-dataset', status=200,
extra_environ={'REMOTE_USER': str(user['name'])})
nosetools.assert_true('showcase-add' in showcase_list_response.forms)
class TestShowcasePackageAssociationCreate(ShowcaseFunctionalTestBase):
def test_showcase_package_association_create_no_user(self):
'''
Calling showcase package association create with no user raises
NotAuthorized.
'''
context = {'user': None, 'model': None}
nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
'ckanext_showcase_package_association_create',
context=context)
def test_showcase_package_association_create_sysadmin(self):
'''
Calling showcase package association create by a sysadmin doesn't
raise NotAuthorized.
'''
a_sysadmin = factories.Sysadmin()
context = {'user': a_sysadmin['name'], 'model': None}
helpers.call_auth('ckanext_showcase_package_association_create',
context=context)
def test_showcase_package_association_create_showcase_admin(self):
'''
Calling showcase package association create by a showcase admin
doesn't raise NotAuthorized.
'''
showcase_admin = factories.User()
# Make user a showcase admin
helpers.call_action('ckanext_showcase_admin_add', context={},
username=showcase_admin['name'])
context = {'user': showcase_admin['name'], 'model': None}
helpers.call_auth('ckanext_showcase_package_association_create',
context=context)
def test_showcase_package_association_create_unauthorized_creds(self):
'''
Calling showcase package association create with unauthorized user
raises NotAuthorized.
'''
not_a_sysadmin = factories.User()
context = {'user': not_a_sysadmin['name'], 'model': None}
nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
'ckanext_showcase_package_association_create',
context=context)
class TestShowcasePackageAssociationDelete(ShowcaseFunctionalTestBase):
def test_showcase_package_association_delete_no_user(self):
'''
Calling showcase package association delete with no user raises
NotAuthorized.
'''
context = {'user': None, 'model': None}
nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
'ckanext_showcase_package_association_delete',
context=context)
def test_showcase_package_association_delete_sysadmin(self):
'''
Calling showcase package association delete by a sysadmin doesn't
raise NotAuthorized.
'''
a_sysadmin = factories.Sysadmin()
context = {'user': a_sysadmin['name'], 'model': None}
helpers.call_auth('ckanext_showcase_package_association_delete',
context=context)
def test_showcase_package_association_delete_showcase_admin(self):
'''
Calling showcase package association delete by a showcase admin
doesn't raise NotAuthorized.
'''
showcase_admin = factories.User()
# Make user a showcase admin
helpers.call_action('ckanext_showcase_admin_add', context={},
username=showcase_admin['name'])
context = {'user': showcase_admin['name'], 'model': None}
helpers.call_auth('ckanext_showcase_package_association_delete',
context=context)
def test_showcase_package_association_delete_unauthorized_creds(self):
'''
Calling showcase package association delete with unauthorized user
raises NotAuthorized.
'''
not_a_sysadmin = factories.User()
context = {'user': not_a_sysadmin['name'], 'model': None}
nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
'ckanext_showcase_package_association_delete',
context=context)
class TestShowcaseAdminAddAuth(ShowcaseFunctionalTestBase):
def test_showcase_admin_add_no_user(self):
'''
Calling showcase admin add with no user raises NotAuthorized.
'''
context = {'user': None, 'model': None}
nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
'ckanext_showcase_admin_add', context=context)
def test_showcase_admin_add_correct_creds(self):
'''
Calling showcase admin add by a sysadmin doesn't raise
NotAuthorized.
'''
a_sysadmin = factories.Sysadmin()
context = {'user': a_sysadmin['name'], 'model': None}
helpers.call_auth('ckanext_showcase_admin_add', context=context)
def test_showcase_admin_add_unauthorized_creds(self):
'''
Calling showcase admin add with unauthorized user raises
NotAuthorized.
'''
not_a_sysadmin = factories.User()
context = {'user': not_a_sysadmin['name'], 'model': None}
nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
'ckanext_showcase_admin_add', context=context)
class TestShowcaseAdminRemoveAuth(ShowcaseFunctionalTestBase):
def test_showcase_admin_remove_no_user(self):
'''
Calling showcase admin remove with no user raises NotAuthorized.
'''
context = {'user': None, 'model': None}
nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
'ckanext_showcase_admin_remove', context=context)
def test_showcase_admin_remove_correct_creds(self):
'''
Calling showcase admin remove by a sysadmin doesn't raise
NotAuthorized.
'''
a_sysadmin = factories.Sysadmin()
context = {'user': a_sysadmin['name'], 'model': None}
helpers.call_auth('ckanext_showcase_admin_remove', context=context)
def test_showcase_admin_remove_unauthorized_creds(self):
'''
Calling showcase admin remove with unauthorized user raises
NotAuthorized.
'''
not_a_sysadmin = factories.User()
context = {'user': not_a_sysadmin['name'], 'model': None}
nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
'ckanext_showcase_admin_remove', context=context)
class TestShowcaseAdminListAuth(ShowcaseFunctionalTestBase):
def test_showcase_admin_list_no_user(self):
'''
Calling showcase admin list with no user raises NotAuthorized.
'''
context = {'user': None, 'model': None}
nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
'ckanext_showcase_admin_list', context=context)
def test_showcase_admin_list_correct_creds(self):
'''
Calling showcase admin list by a sysadmin doesn't raise
NotAuthorized.
'''
a_sysadmin = factories.Sysadmin()
context = {'user': a_sysadmin['name'], 'model': None}
helpers.call_auth('ckanext_showcase_admin_list', context=context)
def test_showcase_admin_list_unauthorized_creds(self):
'''
Calling showcase admin list with unauthorized user raises
NotAuthorized.
'''
not_a_sysadmin = factories.User()
context = {'user': not_a_sysadmin['name'], 'model': None}
nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
'ckanext_showcase_admin_list', context=context)
class TestShowcaseAuthManageShowcaseAdmins(ShowcaseFunctionalTestBase):
def test_auth_anon_user_cant_view_showcase_admin_manage_page(self):
'''
An anon (not logged in) user can't access the manage showcase admin
page.
'''
app = self._get_test_app()
app.get("/showcase/new", status=302)
def test_auth_logged_in_user_cant_view_showcase_admin_manage_page(self):
'''
A logged in user can't access the manage showcase admin page.
'''
app = self._get_test_app()
user = factories.User()
app.get("/showcase/new", status=401,
extra_environ={'REMOTE_USER': str(user['name'])})
def test_auth_sysadmin_can_view_showcase_admin_manage_page(self):
'''
A sysadmin can access the manage showcase admin page.
'''
app = self._get_test_app()
user = factories.Sysadmin()
app.get("/showcase/new", status=200,
extra_environ={'REMOTE_USER': str(user['name'])})
# --- webdriver/tests/cookies/add_cookie.py
# --- repo: hayatoito/web-platform-tests (BSD-3-Clause)
from tests.support.fixtures import clear_all_cookies
from tests.support.fixtures import server_config
from datetime import datetime, timedelta
def test_add_domain_cookie(session, url):
session.url = url("/common/blank.html")
clear_all_cookies(session)
create_cookie_request = {
"cookie": {
"name": "hello",
"value": "world",
"domain": "web-platform.test",
"path": "/",
"httpOnly": False,
"secure": False
}
}
result = session.transport.send("POST", "session/%s/cookie" % session.session_id, create_cookie_request)
assert result.status == 200
assert "value" in result.body
assert isinstance(result.body["value"], dict)
result = session.transport.send("GET", "session/%s/cookie" % session.session_id)
assert result.status == 200
assert "value" in result.body
assert isinstance(result.body["value"], list)
assert len(result.body["value"]) == 1
assert isinstance(result.body["value"][0], dict)
cookie = result.body["value"][0]
assert "name" in cookie
assert isinstance(cookie["name"], basestring)
assert "value" in cookie
assert isinstance(cookie["value"], basestring)
assert "domain" in cookie
assert isinstance(cookie["domain"], basestring)
assert cookie["name"] == "hello"
assert cookie["value"] == "world"
assert cookie["domain"] == ".web-platform.test"
def test_add_cookie_for_ip(session, url, server_config):
session.url = "http://127.0.0.1:%s/404" % (server_config["ports"]["http"][0])
clear_all_cookies(session)
create_cookie_request = {
"cookie": {
"name": "hello",
"value": "world",
"domain": "127.0.0.1",
"path": "/",
"httpOnly": False,
"secure": False
}
}
result = session.transport.send("POST", "session/%s/cookie" % session.session_id, create_cookie_request)
assert result.status == 200
assert "value" in result.body
assert isinstance(result.body["value"], dict)
result = session.transport.send("GET", "session/%s/cookie" % session.session_id)
assert result.status == 200
assert "value" in result.body
assert isinstance(result.body["value"], list)
assert len(result.body["value"]) == 1
assert isinstance(result.body["value"][0], dict)
cookie = result.body["value"][0]
assert "name" in cookie
assert isinstance(cookie["name"], basestring)
assert "value" in cookie
assert isinstance(cookie["value"], basestring)
assert "domain" in cookie
assert isinstance(cookie["domain"], basestring)
assert cookie["name"] == "hello"
assert cookie["value"] == "world"
assert cookie["domain"] == "127.0.0.1"
def test_add_non_session_cookie(session, url):
session.url = url("/common/blank.html")
clear_all_cookies(session)
# strftime("%s") is a non-portable glibc extension and is interpreted in
# local time; compute seconds since the Unix epoch directly instead.
a_year_from_now = int((datetime.utcnow() + timedelta(days=365) - datetime.utcfromtimestamp(0)).total_seconds())
create_cookie_request = {
"cookie": {
"name": "hello",
"value": "world",
"expiry": a_year_from_now
}
}
result = session.transport.send("POST", "session/%s/cookie" % session.session_id, create_cookie_request)
assert result.status == 200
assert "value" in result.body
assert isinstance(result.body["value"], dict)
result = session.transport.send("GET", "session/%s/cookie" % session.session_id)
assert result.status == 200
assert "value" in result.body
assert isinstance(result.body["value"], list)
assert len(result.body["value"]) == 1
assert isinstance(result.body["value"][0], dict)
cookie = result.body["value"][0]
assert "name" in cookie
assert isinstance(cookie["name"], basestring)
assert "value" in cookie
assert isinstance(cookie["value"], basestring)
assert "expiry" in cookie
assert isinstance(cookie["expiry"], int)
assert cookie["name"] == "hello"
assert cookie["value"] == "world"
assert cookie["expiry"] == a_year_from_now
def test_add_session_cookie(session, url):
session.url = url("/common/blank.html")
clear_all_cookies(session)
create_cookie_request = {
"cookie": {
"name": "hello",
"value": "world"
}
}
result = session.transport.send("POST", "session/%s/cookie" % session.session_id, create_cookie_request)
assert result.status == 200
assert "value" in result.body
assert isinstance(result.body["value"], dict)
result = session.transport.send("GET", "session/%s/cookie" % session.session_id)
assert result.status == 200
assert "value" in result.body
assert isinstance(result.body["value"], list)
assert len(result.body["value"]) == 1
assert isinstance(result.body["value"][0], dict)
cookie = result.body["value"][0]
assert "name" in cookie
assert isinstance(cookie["name"], basestring)
assert "value" in cookie
assert isinstance(cookie["value"], basestring)
assert "expiry" in cookie
assert cookie.get("expiry") is None
assert cookie["name"] == "hello"
assert cookie["value"] == "world"
# --- service/ut/base_ut.py
# --- repo: NASA-PDS/registry-api (Apache-2.0)
import helpers
import unittest
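These tests depend on a `helpers` module that is not included in this chunk. As a heavily hedged reconstruction (the function bodies and the `REGISTRY_API_BASE` environment variable below are assumptions, not the project's actual code), `make_url` and `fetch_kvp_json` plausibly look something like:

```python
import json
import os
import urllib.error
import urllib.request

def make_url(endpoint):
    # Join the registry API base URL (assumed to come from the environment)
    # with an endpoint path such as "/bundles".
    return os.environ.get("REGISTRY_API_BASE", "http://localhost:8080") + endpoint

def fetch_kvp_json(url):
    # Return (HTTP status, decoded JSON body) for a GET request, including
    # error responses such as the 404 checked in test_bad_lidvid.
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status, json.loads(resp.read())
    except urllib.error.HTTPError as err:
        return err.code, json.loads(err.read())
```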
def test_bad_lidvid():
ep = '/bundles/notreal'
status,data = helpers.fetch_kvp_json (helpers.make_url (ep))
assert 404 == status
assert 'message' in data
assert 'request' in data
assert 'The lidvid notreal was not found' == data['message']
assert data['request'] == ep
return
class TestBundles(unittest.TestCase):
def test_bundles(self):
status,resp = helpers.fetch_kvp_json (helpers.make_url ('/bundles'))
self.assertEqual (200, status)
self.assertIn ('summary', resp)
self.assertIn ('hits', resp['summary'])
self.assertEqual (1, resp['summary']['hits'])
        self.assertIn('data', resp)
        self.assertEqual(resp['summary']['hits'], len(resp['data']))
        self.assertIn('lidvid', resp['data'][0])
        return resp['data'][0]['lidvid']
    def test_lidvid(self):
        lidvid = self.test_bundles()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/bundles/' + lidvid))
        self.assertEqual(200, status)
        self.assertIn('lidvid', resp)
        self.assertEqual(lidvid, resp['lidvid'])
        return
    def test_lidvid_latest(self):
        lidvid = self.test_bundles()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/bundles/' + lidvid + '/latest'))
        self.assertEqual(200, status)
        self.assertIn('lidvid', resp)
        self.assertEqual(lidvid, resp['lidvid'])
        return
    def test_lidvid_all(self):
        lidvid = self.test_bundles()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/bundles/' + lidvid + '/all'))
        self.assertEqual(200, status)
        self.assertIn('summary', resp)
        self.assertIn('hits', resp['summary'])
        self.assertEqual(1, resp['summary']['hits'])
        self.assertIn('data', resp)
        self.assertEqual(resp['summary']['hits'], len(resp['data']))
        self.assertIn('lidvid', resp['data'][0])
        self.assertEqual(lidvid, resp['data'][0]['lidvid'])
        return
    def test_collections(self):
        lidvid = self.test_bundles()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/bundles/' + lidvid + '/collections'))
        self.assertEqual(200, status)
        self.assertIn('summary', resp)
        self.assertIn('hits', resp['summary'])
        self.assertEqual(3, resp['summary']['hits'])
        self.assertIn('data', resp)
        self.assertEqual(resp['summary']['hits'], len(resp['data']))
        self.assertIn('lidvid', resp['data'][0])
        return
    def test_collections_latest(self):
        lidvid = self.test_bundles()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/bundles/' + lidvid + '/collections/latest'))
        self.assertEqual(200, status)
        self.assertIn('summary', resp)
        self.assertIn('hits', resp['summary'])
        self.assertEqual(3, resp['summary']['hits'])
        self.assertIn('data', resp)
        self.assertEqual(resp['summary']['hits'], len(resp['data']))
        self.assertIn('lidvid', resp['data'][0])
        return
    def test_collections_all(self):
        lidvid = self.test_bundles()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/bundles/' + lidvid + '/collections/all'))
        self.assertEqual(200, status)
        self.assertIn('summary', resp)
        self.assertIn('hits', resp['summary'])
        self.assertEqual(3, resp['summary']['hits'])
        self.assertIn('data', resp)
        self.assertEqual(resp['summary']['hits'], len(resp['data']))
        self.assertIn('lidvid', resp['data'][0])
        return
    def test_products(self):
        lidvid = self.test_bundles()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/bundles/' + lidvid + '/products'))
        self.assertEqual(200, status)
        self.assertIn('summary', resp)
        self.assertIn('hits', resp['summary'])
        self.assertEqual(21, resp['summary']['hits'])
        self.assertIn('data', resp)
        self.assertEqual(resp['summary']['hits'], len(resp['data']))
        self.assertIn('lidvid', resp['data'][0])
        return
    pass
class TestCollections(unittest.TestCase):
    def test_bundles(self):
        lidvid = self.test_collections()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/collections/' + lidvid + '/bundles'))
        self.assertEqual(200, status)
        self.assertIn('summary', resp)
        self.assertIn('hits', resp['summary'])
        self.assertEqual(1, resp['summary']['hits'])
        self.assertIn('data', resp)
        self.assertEqual(resp['summary']['hits'], len(resp['data']))
        self.assertIn('lidvid', resp['data'][0])
        return
    def test_collections(self):
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/collections'))
        self.assertEqual(200, status)
        self.assertIn('summary', resp)
        self.assertIn('hits', resp['summary'])
        self.assertEqual(3, resp['summary']['hits'])
        self.assertIn('data', resp)
        self.assertEqual(resp['summary']['hits'], len(resp['data']))
        self.assertIn('lidvid', resp['data'][0])
        return resp['data'][0]['lidvid']
    def test_lidvid(self):
        lidvid = self.test_collections()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/collections/' + lidvid))
        self.assertEqual(200, status)
        self.assertIn('lidvid', resp)
        self.assertEqual(lidvid, resp['lidvid'])
        return
    def test_lidvid_latest(self):
        lidvid = self.test_collections()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/collections/' + lidvid + '/latest'))
        self.assertEqual(200, status)
        self.assertIn('lidvid', resp)
        self.assertEqual(lidvid, resp['lidvid'])
        return
    def test_lidvid_all(self):
        lidvid = self.test_collections()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/collections/' + lidvid + '/all'))
        self.assertEqual(200, status)
        self.assertIn('summary', resp)
        self.assertIn('hits', resp['summary'])
        self.assertEqual(1, resp['summary']['hits'])
        self.assertIn('data', resp)
        self.assertEqual(resp['summary']['hits'], len(resp['data']))
        self.assertIn('lidvid', resp['data'][0])
        self.assertEqual(lidvid, resp['data'][0]['lidvid'])
        return
    def test_products(self):
        lidvid = self.test_collections()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/collections/' + lidvid + '/products'))
        self.assertEqual(200, status)
        self.assertIn('summary', resp)
        self.assertIn('hits', resp['summary'])
        self.assertEqual(7, resp['summary']['hits'])
        self.assertIn('data', resp)
        self.assertEqual(resp['summary']['hits'], len(resp['data']))
        self.assertIn('lidvid', resp['data'][0])
        return
    def test_products_latest(self):
        lidvid = self.test_collections()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/collections/' + lidvid + '/products/latest'))
        self.assertEqual(200, status)
        self.assertIn('summary', resp)
        self.assertIn('hits', resp['summary'])
        self.assertEqual(7, resp['summary']['hits'])
        self.assertIn('data', resp)
        self.assertEqual(resp['summary']['hits'], len(resp['data']))
        self.assertIn('lidvid', resp['data'][0])
        return
    def test_products_all(self):
        lidvid = self.test_collections()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/collections/' + lidvid + '/products/all'))
        self.assertEqual(200, status)
        self.assertIn('summary', resp)
        self.assertIn('hits', resp['summary'])
        self.assertEqual(7, resp['summary']['hits'])
        self.assertIn('data', resp)
        self.assertEqual(resp['summary']['hits'], len(resp['data']))
        self.assertIn('lidvid', resp['data'][0])
        return
    pass
class TestProducts(unittest.TestCase):
    def test_products(self):
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/products'))
        self.assertEqual(200, status)
        self.assertIn('summary', resp)
        self.assertIn('hits', resp['summary'])
        self.assertEqual(25, resp['summary']['hits'])
        self.assertIn('data', resp)
        self.assertEqual(resp['summary']['hits'], len(resp['data']))
        self.assertIn('lidvid', resp['data'][0])
        return resp['data'][-1]['lidvid']
    def test_lidvid(self):
        lidvid = self.test_products()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/products/' + lidvid))
        self.assertEqual(200, status)
        self.assertIn('lidvid', resp)
        self.assertEqual(lidvid, resp['lidvid'])
        return
    def test_lidvid_latest(self):
        lidvid = self.test_products()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/products/' + lidvid + '/latest'))
        self.assertEqual(200, status)
        self.assertIn('lidvid', resp)
        self.assertEqual(lidvid, resp['lidvid'])
        return
    def test_lidvid_all(self):
        lidvid = self.test_products()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/products/' + lidvid + '/all'))
        self.assertEqual(200, status)
        self.assertIn('summary', resp)
        self.assertIn('hits', resp['summary'])
        self.assertEqual(1, resp['summary']['hits'])
        self.assertIn('data', resp)
        self.assertEqual(resp['summary']['hits'], len(resp['data']))
        self.assertIn('lidvid', resp['data'][0])
        self.assertEqual(lidvid, resp['data'][0]['lidvid'])
        return
    def test_collections(self):
        lidvid = self.test_products()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/products/' + lidvid + '/collections'))
        self.assertEqual(200, status)
        self.assertIn('summary', resp)
        self.assertIn('hits', resp['summary'])
        self.assertEqual(1, resp['summary']['hits'])
        self.assertIn('data', resp)
        self.assertEqual(resp['summary']['hits'], len(resp['data']))
        self.assertIn('lidvid', resp['data'][0])
        return
    def test_bundles(self):
        lidvid = self.test_products()
        status, resp = helpers.fetch_kvp_json(helpers.make_url('/products/' + lidvid + '/bundles'))
        self.assertEqual(200, status)
        self.assertIn('summary', resp)
        self.assertIn('hits', resp['summary'])
        self.assertEqual(1, resp['summary']['hits'])
        self.assertIn('data', resp)
        self.assertEqual(resp['summary']['hits'], len(resp['data']))
        self.assertIn('lidvid', resp['data'][0])
        return
    pass
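The assertions above lean on a small `helpers` module (`make_url`, `fetch_kvp_json`) that is not included in this chunk. A minimal sketch of what such helpers might look like, assuming the registry API is served locally over HTTP — the base URL and the use of `urllib` are assumptions, not the project's actual implementation:

```python
# Hypothetical sketch of the helpers module used by the tests above;
# the real implementation may differ.
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # assumed address of the API under test

def make_url(path):
    """Build an absolute API URL from a path such as '/bundles'."""
    return BASE_URL + path

def fetch_kvp_json(url):
    """GET a URL and return (status, parsed JSON body) as the tests expect."""
    with urllib.request.urlopen(url) as resp:
        return resp.status, json.loads(resp.read().decode("utf-8"))
```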
| 42.032258 | 94 | 0.560587 | 1,219 | 11,727 | 5.292863 | 0.041838 | 0.130192 | 0.074396 | 0.067731 | 0.957843 | 0.957843 | 0.950713 | 0.942343 | 0.938159 | 0.938159 | 0 | 0.013006 | 0.285324 | 11,727 | 278 | 95 | 42.183453 | 0.756831 | 0 | 0 | 0.834646 | 0 | 0 | 0.129968 | 0 | 0 | 0 | 0 | 0 | 0.543307 | 1 | 0.090551 | false | 0.011811 | 0.007874 | 0 | 0.200787 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
86c2ad96d91d8c6ff592ae8a5487629eb524486f | 22,467 | py | Python | enrich/merge.py | sonar-idh/Transformer | bbfde9d19ad4a4917257483ba66eb30a8294244d | [
"MIT"
] | null | null | null | enrich/merge.py | sonar-idh/Transformer | bbfde9d19ad4a4917257483ba66eb30a8294244d | [
"MIT"
] | null | null | null | enrich/merge.py | sonar-idh/Transformer | bbfde9d19ad4a4917257483ba66eb30a8294244d | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
@author: melina
"""
from os import listdir
from os.path import isfile, join
import fire
def merge_ocr_files(outfile, ocr_data_path):
"""
WORKS ONLY FOR THE FIRST FILE OF THE OCR BATCH3. GND ENTITIES HAVE BEEN ADDED
MANUALLY.
Merges all ocr files into one graphml file.
---------
outfile : str
Name of output file. Needs to end in .graphml .
ocr_data_path : name of directory which contains the
ocr .graphml files
Returns
-----------
None.
"""
with open(outfile, 'w', encoding='utf8') as out:
out.write("""<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns http://graphml.graphdrawing.org/xmlns/1.0/graphml.xsd">
<key id="id" for="node" attr.name="id" attr.type="string"/>
<key id="IdGND" for="node" attr.name="IdGND" attr.type="string"/>
<key id="IdWikidata" for="node" attr.name="IdWikidata" attr.type="string"/>
<key id="OldId" for="node" attr.name="OldId" attr.type="string"/>
<key id="Uri" for="node" attr.name="Uri" attr.type="string"/>
<key id="GenType" for="node" attr.name="GenType" attr.type="string"/>
<key id="SpecType" for="node" attr.name="SpecType" attr.type="string"/>
<key id="Source" for="node" attr.name="Source" attr.type="string"/>
<key id="Name" for="node" attr.name="Name" attr.type="string"/>
<key id="IdZDB" for="node" attr.name="IdZDB" attr.type="string"/>
<key id="DateApproxBegin" for="node" attr.name="DateApproxBegin" attr.type="string"/>
<key id="DateStrictBegin" for="node" attr.name="DateStrictBegin" attr.type="string"/>
<key id="VariantName" for="node" attr.name="VariantName" attr.type="string"/>
<key id="Gender" for="node" attr.name="Gender" attr.type="string"/>
<key id="DateOriginal" for="node" attr.name="DateOriginal" attr.type="string"/>
<key id="DateApproxOriginal" for="node" attr.name="DateApproxOriginal" attr.type="string"/>
<key id="DateApproxEnd" for="node" attr.name="DateApproxEnd" attr.type="string"/>
<key id="DateStrictOriginal" for="node" attr.name="DateStrictOriginal" attr.type="string"/>
<key id="DateStrictEnd" for="node" attr.name="DateStrictEnd" attr.type="string"/>
<key id="SubUnit" for="node" attr.name="SubUnit" attr.type="string"/>
<key id="Info" for="node" attr.name="Info" attr.type="string"/>
<key id="Place" for="node" attr.name="Place" attr.type="string"/>
<key id="Date" for="node" attr.name="Date" attr.type="string"/>
<key id="Creator" for="node" attr.name="Creator" attr.type="string"/>
<key id="Medium" for="node" attr.name="Medium" attr.type="string"/>
<key id="Lang" for="node" attr.name="Lang" attr.type="string"/>
<key id="GenSubdiv" for="node" attr.name="GenSubdiv" attr.type="string"/>
<key id="GeoArea" for="node" attr.name="GeoArea" attr.type="string"/>
<key id="Coordinates" for="node" attr.name="Coordinates" attr.type="string"/>
<key id="UriGeonames" for="node" attr.name="UriGeonames" attr.type="string"/>
<key id="Title" for="node" attr.name="Title" attr.type="string"/>
<key id="Genre" for="node" attr.name="Genre" attr.type="string"/>
<key id="labels" for="node" attr.name="labels" attr.type="string"/>
<key id="issue" for="node" attr.name="issue" attr.type="string"/>
<key id="page" for="node" attr.name="page" attr.type="string"/>
<key id="article" for="node" attr.name="article" attr.type="string"/>
<key id="version" for="node" attr.name="version" attr.type="string"/>
<key id="url" for="node" attr.name="url" attr.type="string"/>
<key id="id" for="edge" attr.name="id" attr.type="string"/>
<key id="source" for="edge" attr.name="source" attr.type="string"/>
<key id="target" for="edge" attr.name="target" attr.type="string"/>
<key id="label" for="edge" attr.name="label" attr.type="string"/>
<key id="TypeAddInfo" for="edge" attr.name="TypeAddInfo" attr.type="string"/>
<key id="Sent" for="edge" attr.name="Sent" attr.type="string"/>
<key id="Name" for="edge" attr.name="Name" attr.type="string"/>
<key id="Emb" for="edge" attr.name="Emb" attr.type="string"/>
<key id="Left" for="edge" attr.name="Left" attr.type="string"/>
<key id="Top" for="edge" attr.name="Top" attr.type="string"/>
<key id="Width" for="edge" attr.name="Width" attr.type="string"/>
<key id="Height" for="edge" attr.name="Height" attr.type="string"/>
<key id="WdDateApproxBegin" for="node" attr.name="WdDateApproxBegin" attr.type="string"/>
<key id="WdDateStrictBegin" for="node" attr.name="WdDateStrictBegin" attr.type="string"/>
<key id="WdDateApproxOriginal" for="node" attr.name="WdDateApproxOriginal" attr.type="string"/>
<key id="WdDateStrictOriginal" for="node" attr.name="WdDateStrictOriginal" attr.type="string"/>
<key id="WdDateApproxEnd" for="node" attr.name="WdDateApproxEnd" attr.type="string"/>
<key id="WdDateStrictEnd" for="node" attr.name="WdDateStrictEnd" attr.type="string"/>
<key id="WdGender" for="node" attr.name="WdGender" attr.type="string"/>
<key id="WdPlaceOfBirth" for="node" attr.name="WdPlaceOfBirth" attr.type="string"/>
<key id="WdPlaceOfBirthId" for="node" attr.name="WdPlaceOfBirthId" attr.type="string"/>
<key id="WdPlaceOfDeath" for="node" attr.name="WdPlaceOfDeath" attr.type="string"/>
<key id="WdPlaceOfDeathId" for="node" attr.name="WdPlaceOfDeathId" attr.type="string"/>
<key id="SourceType" for="edge" attr.name="SourceType" attr.type="string"/>
<key id="TempValidity" for="edge" attr.name="TempValidity" attr.type="string"/>
<key id="Source" for="edge" attr.name="Source" attr.type="string"/>
<graph id="G" edgedefault="directed">
""")
        with open(ocr_data_path + '/' + 'OCRDocumentNodes.graphml', 'r', encoding='utf8') as file:
            for line in file.readlines():
                out.write(line)
        with open(ocr_data_path + '/' + 'WikiNodes.graphml', 'r', encoding='utf8') as file:
            for line in file.readlines():
                out.write(line)
        out.write("""<node id="Aut4005728_8" labels=":GeoName"><data key="labels">:GeoName</data><data key="Coordinates">N052.500000 E013.416669</data><data key="GenType">g</data><data key="GeoArea">XA-DE-BE</data><data key="Id">(DE-588)4005728-8</data><data key="IdGeonames">http://sws.geonames.org/2950157</data><data key="Name">Berlin</data><data key="OldId">(DE-588)7761961-4;;;(DE-588)2004272-3</data><data key="SpecType">gik;;;gif</data><data key="Uri">http://d-nb.info/gnd/4005728-8</data><data key="VariantName">Großberlin;;;Groß-Berlin;;;Haupt- und Residenz-Stadt Berlin;;;Reichshauptstadt Berlin;;;Berlino;;;Berolino;;;Stadtgemeinde Berlin;;;Hauptstadt Berlin;;;Birlīn;;;Barlīn;;;Berolinon;;;Land Berlin;;;Coloniae Brandenburgicae;;;Berlinum;;;Verolino;;;Berolinum;;;Cölln an der Spree;;;Colonia Brandenburgica;;;Colonia Marchica;;;Cöln an der Spree;;;Berlin;;;"Besonderes Gebiet" Berlin;;;Gross-Berlin</data></node>
<node id="Aut5006371_6" labels=":CorpName"><data key="labels">:CorpName</data><data key="DateApproxBegin">1861</data><data key="DateApproxEnd">1884</data><data key="DateOriginal">1861-1884</data><data key="GenType">b</data><data key="Id">(DE-588)5006371-6</data><data key="Name">Deutsche Fortschrittspartei</data><data key="OldId">(DE-588)4011640-2</data><data key="SpecType">kiz</data><data key="Uri">http://d-nb.info/gnd/5006371-6</data><data key="VariantName">Fortschrittspartei;;;DFP</data></node>
<node id="Aut4047194_9" labels=":GeoName"><data key="labels">:GeoName</data><data key="Coordinates"> </data><data key="GenType">g</data><data key="GeoArea">XA-DXDE;;;XA-DE</data><data key="Id">(DE-588)4047194-9</data><data key="Name">Preußen</data><data key="OldId">(DE-588)1086116984;;;(DE-588)35060-6</data><data key="SpecType">gik</data><data key="Uri">http://d-nb.info/gnd/4047194-9</data><data key="VariantName">Brandenburg-Preußen;;;Königreich Preußen;;;Preußische Staaten;;;Preußischer Staat;;;Der Preußische Staat;;;Königlich Preußische Staaten;;;Prusy Królewskie;;;Königlich Preußen;;;Prussia</data></node>
<node id="Aut42375_0" labels=":CorpName"><data key="labels">:CorpName</data><data key="DateApproxBegin">1844</data><data key="DateOriginal">1844-</data><data key="GenType">b</data><data key="Id">(DE-588)42375-0</data><data key="Name">Centralverein für das Wohl der Arbeitenden Klassen</data><data key="OldId">(DE-588)1089039174</data><data key="SpecType">kiz</data><data key="Uri">http://d-nb.info/gnd/42375-0</data><data key="VariantName">Zentralverein für das Wohl der Arbeitenden Klassen;;;Centralverein zum Wohl der Arbeitenden Klassen</data></node>
""")
        with open(ocr_data_path + '/' + 'DocContainsEntEdges.graphml', 'r', encoding='utf8') as file:
            for line in file.readlines():
                out.write(line)
        with open(ocr_data_path + '/' + 'SameAsEdges.graphml', 'r', encoding='utf8') as file:
            for line in file.readlines():
                out.write(line)
        out.write("""</graph>
</graphml>""")
def merge_all_files(outfile, ocr_data_path, all_data_path):
"""
Merges all files into one graphml file.
---------
outfile : str
Name of output file. Needs to end in .graphml .
ocr_data_path : str
Name of directory which contains the
ocr .graphml files
all_data_path : str
Name of directory which contains all .graphml
files (except the ocr .graphml files)
Returns
-----------
None.
"""
allfiles = [f for f in listdir(all_data_path) if isfile(join(all_data_path, f))]
with open(outfile, 'w', encoding='utf8') as out:
out.write("""<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns http://graphml.graphdrawing.org/xmlns/1.0/graphml.xsd">
<key id="id" for="node" attr.name="id" attr.type="string"/>
<key id="IdGND" for="node" attr.name="IdGND" attr.type="string"/>
<key id="IdWikidata" for="node" attr.name="IdWikidata" attr.type="string"/>
<key id="OldId" for="node" attr.name="OldId" attr.type="string"/>
<key id="Id" for="node" attr.name="Id" attr.type="string"/>
<key id="Uri" for="node" attr.name="Uri" attr.type="string"/>
<key id="GenType" for="node" attr.name="GenType" attr.type="string"/>
<key id="SpecType" for="node" attr.name="SpecType" attr.type="string"/>
<key id="Source" for="node" attr.name="Source" attr.type="string"/>
<key id="Name" for="node" attr.name="Name" attr.type="string"/>
<key id="IdZDB" for="node" attr.name="IdZDB" attr.type="string"/>
<key id="DateApproxBegin" for="node" attr.name="DateApproxBegin" attr.type="string"/>
<key id="DateStrictBegin" for="node" attr.name="DateStrictBegin" attr.type="string"/>
<key id="VariantName" for="node" attr.name="VariantName" attr.type="string"/>
<key id="Gender" for="node" attr.name="Gender" attr.type="string"/>
<key id="DateOriginal" for="node" attr.name="DateOriginal" attr.type="string"/>
<key id="DateApproxOriginal" for="node" attr.name="DateApproxOriginal" attr.type="string"/>
<key id="DateApproxEnd" for="node" attr.name="DateApproxEnd" attr.type="string"/>
<key id="DateStrictOriginal" for="node" attr.name="DateStrictOriginal" attr.type="string"/>
<key id="DateStrictEnd" for="node" attr.name="DateStrictEnd" attr.type="string"/>
<key id="SubUnit" for="node" attr.name="SubUnit" attr.type="string"/>
<key id="Info" for="node" attr.name="Info" attr.type="string"/>
<key id="Place" for="node" attr.name="Place" attr.type="string"/>
<key id="Date" for="node" attr.name="Date" attr.type="string"/>
<key id="Creator" for="node" attr.name="Creator" attr.type="string"/>
<key id="Medium" for="node" attr.name="Medium" attr.type="string"/>
<key id="Lang" for="node" attr.name="Lang" attr.type="string"/>
<key id="GenSubdiv" for="node" attr.name="GenSubdiv" attr.type="string"/>
<key id="GeoArea" for="node" attr.name="GeoArea" attr.type="string"/>
<key id="Coordinates" for="node" attr.name="Coordinates" attr.type="string"/>
<key id="UriGeonames" for="node" attr.name="UriGeonames" attr.type="string"/>
<key id="Title" for="node" attr.name="Title" attr.type="string"/>
<key id="Genre" for="node" attr.name="Genre" attr.type="string"/>
<key id="labels" for="node" attr.name="labels" attr.type="string"/>
<key id="issue" for="node" attr.name="issue" attr.type="string"/>
<key id="page" for="node" attr.name="page" attr.type="string"/>
<key id="article" for="node" attr.name="article" attr.type="string"/>
<key id="version" for="node" attr.name="version" attr.type="string"/>
<key id="url" for="node" attr.name="url" attr.type="string"/>
<key id="id" for="edge" attr.name="id" attr.type="string"/>
<key id="source" for="edge" attr.name="source" attr.type="string"/>
<key id="target" for="edge" attr.name="target" attr.type="string"/>
<key id="label" for="edge" attr.name="label" attr.type="string"/>
<key id="TypeAddInfo" for="edge" attr.name="TypeAddInfo" attr.type="string"/>
<key id="Sent" for="edge" attr.name="Sent" attr.type="string"/>
<key id="Name" for="edge" attr.name="Name" attr.type="string"/>
<key id="Emb" for="edge" attr.name="Emb" attr.type="string"/>
<key id="Left" for="edge" attr.name="Left" attr.type="string"/>
<key id="Top" for="edge" attr.name="Top" attr.type="string"/>
<key id="Width" for="edge" attr.name="Width" attr.type="string"/>
<key id="Height" for="edge" attr.name="Height" attr.type="string"/>
<key id="WdDateApproxBegin" for="node" attr.name="WdDateApproxBegin" attr.type="string"/>
<key id="WdDateStrictBegin" for="node" attr.name="WdDateStrictBegin" attr.type="string"/>
<key id="WdDateApproxOriginal" for="node" attr.name="WdDateApproxOriginal" attr.type="string"/>
<key id="WdDateStrictOriginal" for="node" attr.name="WdDateStrictOriginal" attr.type="string"/>
<key id="WdDateApproxEnd" for="node" attr.name="WdDateApproxEnd" attr.type="string"/>
<key id="WdDateStrictEnd" for="node" attr.name="WdDateStrictEnd" attr.type="string"/>
<key id="WdGender" for="node" attr.name="WdGender" attr.type="string"/>
<key id="WdPlaceOfBirth" for="node" attr.name="WdPlaceOfBirth" attr.type="string"/>
<key id="WdPlaceOfBirthId" for="node" attr.name="WdPlaceOfBirthId" attr.type="string"/>
<key id="WdPlaceOfDeath" for="node" attr.name="WdPlaceOfDeath" attr.type="string"/>
<key id="WdPlaceOfDeathId" for="node" attr.name="WdPlaceOfDeathId" attr.type="string"/>
<key id="SourceType" for="edge" attr.name="SourceType" attr.type="string"/>
<key id="TempValidity" for="edge" attr.name="TempValidity" attr.type="string"/>
<key id="Source" for="edge" attr.name="Source" attr.type="string"/>
<graph id="G" edgedefault="directed">
""")
        ######## ADD NODES #############
        with open(ocr_data_path + '/' + 'OCRDocumentNodes090921.graphml', 'r', encoding='utf8') as file:
            print("OCRDocumentNodes...")
            for line in file.readlines():
                out.write(line)
        with open(ocr_data_path + '/' + 'WikiNodes090921.graphml', 'r', encoding='utf8') as file:
            print("WikiNodes...")
            for line in file.readlines():
                out.write(line)
        ######## ADD ISIL NODES ###########
        with open("D:/SoNAR/Transformers/data/IsilNodes.graphml", 'r', encoding='utf8') as file:
            print("Isil Nodes...")
            for line in file.readlines():
                out.write(line)
        for f in allfiles:
            if "Nodes" in f:
                print(f)
                file = open(all_data_path + '/' + f, 'r', encoding='utf8')
                for line in file.readlines():
                    out.write(line)
                file.close()
        for f in allfiles:
            if "ChronTerm" in f:
                print(f)
                file = open(all_data_path + '/' + f, 'r', encoding='utf8')
                for line in file.readlines():
                    out.write(line)
                file.close()
        ####### ADD ALL EDGES #############
        for f in allfiles:
            if "Edges" in f:
                print(f)
                file = open(all_data_path + '/' + f, 'r', encoding='utf8')
                for line in file.readlines():
                    out.write(line)
                file.close()
        with open(ocr_data_path + '/' + 'DocContainsEntEdges090921.graphml', 'r', encoding='utf8') as file:
            print("DocContainsEntEdges...")
            for line in file.readlines():
                out.write(line)
        with open(ocr_data_path + '/' + 'SameAsEdges090921.graphml', 'r', encoding='utf8') as file:
            print("SameAsEdges...")
            for line in file.readlines():
                out.write(line)
        out.write("""</graph>
</graphml>""")
def merge_except_ocr(outfile, all_data_path):
"""
Merges all files into one graphml file.
---------
outfile : str
Name of output file. Needs to end in .graphml .
all_data_path : str
Name of directory which contains all .graphml
files (except the ocr .graphml files)
Returns
-----------
None.
"""
allfiles = [f for f in listdir(all_data_path) if isfile(join(all_data_path, f))]
with open(outfile, 'w', encoding='utf8') as out:
out.write("""<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns http://graphml.graphdrawing.org/xmlns/1.0/graphml.xsd">
<key id="id" for="node" attr.name="id" attr.type="string"/>
<key id="Id" for="node" attr.name="Id" attr.type="string"/>
<key id="OldId" for="node" attr.name="OldId" attr.type="string"/>
<key id="Uri" for="node" attr.name="Uri" attr.type="string"/>
<key id="GenType" for="node" attr.name="GenType" attr.type="string"/>
<key id="SpecType" for="node" attr.name="SpecType" attr.type="string"/>
<key id="Name" for="node" attr.name="Name" attr.type="string"/>
<key id="VariantName" for="node" attr.name="VariantName" attr.type="string"/>
<key id="Gender" for="node" attr.name="Gender" attr.type="string"/>
<key id="DateOriginal" for="node" attr.name="DateOriginal" attr.type="string"/>
<key id="DateApproxOriginal" for="node" attr.name="DateApproxOriginal" attr.type="string"/>
<key id="DateApproxBegin" for="node" attr.name="DateApproxBegin" attr.type="string"/>
<key id="DateApproxEnd" for="node" attr.name="DateApproxEnd" attr.type="string"/>
<key id="DateStrictOriginal" for="node" attr.name="DateStrictOriginal" attr.type="string"/>
<key id="DateStrictBegin" for="node" attr.name="DateStrictBegin" attr.type="string"/>
<key id="DateStrictEnd" for="node" attr.name="DateStrictEnd" attr.type="string"/>
<key id="SubUnit" for="node" attr.name="SubUnit" attr.type="string"/>
<key id="Info" for="node" attr.name="Info" attr.type="string"/>
<key id="Place" for="node" attr.name="Place" attr.type="string"/>
<key id="Date" for="node" attr.name="Date" attr.type="string"/>
<key id="Creator" for="node" attr.name="Creator" attr.type="string"/>
<key id="Medium" for="node" attr.name="Medium" attr.type="string"/>
<key id="Lang" for="node" attr.name="Lang" attr.type="string"/>
<key id="GenSubdiv" for="node" attr.name="GenSubdiv" attr.type="string"/>
<key id="GeoArea" for="node" attr.name="GeoArea" attr.type="string"/>
<key id="Coordinates" for="node" attr.name="Coordinates" attr.type="string"/>
<key id="UriGeonames" for="node" attr.name="UriGeonames" attr.type="string"/>
<key id="Title" for="node" attr.name="Title" attr.type="string"/>
<key id="Genre" for="node" attr.name="Genre" attr.type="string"/>
<key id="labels" for="node" attr.name="labels" attr.type="string"/>
<key id="label" for="edge" attr.name="label" attr.type="string"/>
<key id="id" for="edge" attr.name="id" attr.type="string"/>
<key id="SourceType" for="edge" attr.name="SourceType" attr.type="string"/>
<key id="TypeAddInfo" for="edge" attr.name="TypeAddInfo" attr.type="string"/>
<key id="TempValidity" for="edge" attr.name="TempValidity" attr.type="string"/>
<key id="Source" for="edge" attr.name="Source" attr.type="string"/>
<graph id="G" edgedefault="directed">
""")
        ######## ADD NODES ###########
        with open("D:/SoNAR/Transformers/data/IsilNodes.graphml", 'r', encoding='utf8') as file:
            for line in file.readlines():
                out.write(line)
        for f in allfiles:
            if "Nodes" in f:
                print(f)
                # Open from all_data_path, the directory that was listed above.
                file = open(all_data_path + '/' + f, 'r', encoding='utf8')
                for line in file.readlines():
                    out.write(line)
                file.close()
        for f in allfiles:
            if "ChronTerm" in f:
                print(f)
                file = open(all_data_path + '/' + f, 'r', encoding='utf8')
                for line in file.readlines():
                    out.write(line)
                file.close()
        ####### ADD EDGES #############
        for f in allfiles:
            if "Edges" in f:
                print(f)
                file = open(all_data_path + '/' + f, 'r', encoding='utf8')
                for line in file.readlines():
                    out.write(line)
                file.close()
        out.write("""</graph>
</graphml>""")
def replace_spec_char(file, outfile):
"""
Rewrites merged file to replace the special
character "&" (leading to Import Errors) with "&".
---------
file : str
Name + path of merged file.
outfile : str
Name + path of output file
Returns
-----------
None.
"""
    with open(file, 'r', encoding="utf-8") as inp, open(outfile, "w", encoding="utf-8") as out:
        count = 0
        for line in inp:
            count += 1
            newline = line
            if "&" in line:
                # Collapse doubly-escaped entities, then escape bare ampersands.
                newline = line.replace("amp;", "").replace("#38;", "")
                newline = newline.replace("&", "&amp;")
            out.write(newline)
# replace_spec_char("D:/SoNAR/Transformers/data/merged/all_v090921.graphml", "D:/SoNAR/Transformers/data/testfile.graphml")
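The commented invocation above applies the same per-line transformation that `replace_spec_char` performs. As a standalone sketch of that normalization (assuming the intent is to collapse doubly-escaped entities and then re-escape bare ampersands for XML):

```python
# Standalone sketch of the per-line entity normalization.
def normalize_entities(line):
    # Drop stray 'amp;' / '#38;' fragments left over from double escaping...
    line = line.replace("amp;", "").replace("#38;", "")
    # ...then escape every remaining bare ampersand.
    return line.replace("&", "&amp;")

print(normalize_entities("Meyer &amp;amp; Sohn"))  # Meyer &amp; Sohn
```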
if __name__ == '__main__':
    fire.Fire()
# merge_all_files("D:/SoNAR/Transformers/data/merged/merged280921_withspecchar.graphml", "D:/SoNAR/Transformers/data/ocr", "D:/SoNAR/Transformers/data/graphml")
# replace_spec_char("D:/SoNAR/Transformers/data/merged/merged280921_withspecchar.graphml", "D:/SoNAR/Transformers/data/merged280921.graphml") | 61.722527 | 931 | 0.649174 | 3,123 | 22,467 | 4.645533 | 0.090618 | 0.058244 | 0.159223 | 0.189826 | 0.877033 | 0.850358 | 0.837055 | 0.828164 | 0.821616 | 0.811414 | 0 | 0.017074 | 0.144924 | 22,467 | 364 | 932 | 61.722527 | 0.738119 | 0.074598 | 0 | 0.878472 | 0 | 0.597222 | 0.789337 | 0.206163 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013889 | false | 0 | 0.010417 | 0 | 0.024306 | 0.038194 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
86d2f33c19ccdb95c429be84eaf880925c3b05d7 | 18,646 | py | Python | release/stubs.min/Grasshopper/Kernel/Undo/Actions.py | htlcnn/ironpython-stubs | 780d829e2104b2789d5f4d6f32b0ec9f2930ca03 | [
"MIT"
] | 182 | 2017-06-27T02:26:15.000Z | 2022-03-30T18:53:43.000Z | release/stubs.min/Grasshopper/Kernel/Undo/Actions.py | htlcnn/ironpython-stubs | 780d829e2104b2789d5f4d6f32b0ec9f2930ca03 | [
"MIT"
] | 28 | 2017-06-27T13:38:23.000Z | 2022-03-15T11:19:44.000Z | release/stubs.min/Grasshopper/Kernel/Undo/Actions.py | htlcnn/ironpython-stubs | 780d829e2104b2789d5f4d6f32b0ec9f2930ca03 | [
"MIT"
] | 67 | 2017-06-28T09:43:59.000Z | 2022-03-20T21:17:10.000Z | # encoding: utf-8
# module Grasshopper.Kernel.Undo.Actions calls itself Actions
# from Grasshopper,Version=1.0.0.20,Culture=neutral,PublicKeyToken=dda4f5ec2cd80803
# by generator 1.145
""" NamespaceTracker represent a CLS namespace. """
# no imports
# no functions
# classes
class GH_AddObjectAction(GH_UndoAction,IGH_UndoAction,GH_ISerializable):
""" GH_AddObjectAction(obj: IGH_DocumentObject) """
def Internal_Redo(self,*args):
""" Internal_Redo(self: GH_AddObjectAction,doc: GH_Document) """
pass
def Internal_Undo(self,*args):
""" Internal_Undo(self: GH_AddObjectAction,doc: GH_Document) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,obj):
""" __new__(cls: type,obj: IGH_DocumentObject) """
pass
ExpiresSolution=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get: ExpiresSolution(self: GH_AddObjectAction) -> bool
"""
class GH_AddStateAction(GH_ArchivedUndoAction,IGH_UndoAction,GH_ISerializable):
""" GH_AddStateAction(index: int,state: GH_State) """
def Deserialize(self,*args):
""" Deserialize(self: GH_ArchivedUndoAction,obj: GH_ISerializable) """
pass
def Internal_Redo(self,*args):
""" Internal_Redo(self: GH_AddStateAction,doc: GH_Document) """
pass
def Internal_Undo(self,*args):
""" Internal_Undo(self: GH_AddStateAction,doc: GH_Document) """
pass
def Read(self,reader):
""" Read(self: GH_AddStateAction,reader: GH_IReader) -> bool """
pass
def Serialize(self,*args):
""" Serialize(self: GH_ArchivedUndoAction,obj: GH_ISerializable) """
pass
def SerializeToByteArray(self,*args):
""" SerializeToByteArray(self: GH_ArchivedUndoAction,obj: GH_ISerializable) -> Array[Byte] """
pass
def Write(self,writer):
""" Write(self: GH_AddStateAction,writer: GH_IWriter) -> bool """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,index,state):
""" __new__(cls: type,index: int,state: GH_State) """
pass
m_data=None
class GH_DataMatchingAction(GH_ObjectUndoAction,IGH_UndoAction,GH_ISerializable):
""" GH_DataMatchingAction(obj: IGH_Component) """
def Internal_Redo(self,*args):
""" Internal_Redo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Internal_Undo(self,*args):
""" Internal_Undo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Object_Redo(self,*args):
""" Object_Redo(self: GH_DataMatchingAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def Object_Undo(self,*args):
""" Object_Undo(self: GH_DataMatchingAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,obj):
""" __new__(cls: type,obj: IGH_Component) """
pass
ExpiresSolution=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get: ExpiresSolution(self: GH_DataMatchingAction) -> bool
"""
class GH_DataModificationAction(GH_ObjectUndoAction,IGH_UndoAction,GH_ISerializable):
""" GH_DataModificationAction(obj: IGH_Param) """
def Internal_Redo(self,*args):
""" Internal_Redo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Internal_Undo(self,*args):
""" Internal_Undo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Object_Redo(self,*args):
""" Object_Redo(self: GH_DataModificationAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def Object_Undo(self,*args):
""" Object_Undo(self: GH_DataModificationAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,obj):
""" __new__(cls: type,obj: IGH_Param) """
pass
ExpiresSolution=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get: ExpiresSolution(self: GH_DataModificationAction) -> bool
"""
class GH_GenericObjectAction(GH_ArchivedUndoAction,IGH_UndoAction,GH_ISerializable):
""" GH_GenericObjectAction(obj: IGH_DocumentObject) """
def Deserialize(self,*args):
""" Deserialize(self: GH_ArchivedUndoAction,obj: GH_ISerializable) """
pass
def Internal_Redo(self,*args):
""" Internal_Redo(self: GH_GenericObjectAction,doc: GH_Document) """
pass
def Internal_Undo(self,*args):
""" Internal_Undo(self: GH_GenericObjectAction,doc: GH_Document) """
pass
def Serialize(self,*args):
""" Serialize(self: GH_ArchivedUndoAction,obj: GH_ISerializable) """
pass
def SerializeToByteArray(self,*args):
""" SerializeToByteArray(self: GH_ArchivedUndoAction,obj: GH_ISerializable) -> Array[Byte] """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,obj):
""" __new__(cls: type,obj: IGH_DocumentObject) """
pass
ExpiresSolution=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get: ExpiresSolution(self: GH_GenericObjectAction) -> bool
"""
m_data=None
class GH_HiddenAction(GH_ObjectUndoAction,IGH_UndoAction,GH_ISerializable):
""" GH_HiddenAction(obj: IGH_ActiveObject) """
def Internal_Redo(self,*args):
""" Internal_Redo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Internal_Undo(self,*args):
""" Internal_Undo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Object_Redo(self,*args):
""" Object_Redo(self: GH_HiddenAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def Object_Undo(self,*args):
""" Object_Undo(self: GH_HiddenAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,obj):
""" __new__(cls: type,obj: IGH_ActiveObject) """
pass
ExpiresDisplay=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get: ExpiresDisplay(self: GH_HiddenAction) -> bool
"""
class GH_IconDisplayAction(GH_ObjectUndoAction,IGH_UndoAction,GH_ISerializable):
""" GH_IconDisplayAction(obj: IGH_DocumentObject) """
def Internal_Redo(self,*args):
""" Internal_Redo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Internal_Undo(self,*args):
""" Internal_Undo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Object_Redo(self,*args):
""" Object_Redo(self: GH_IconDisplayAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def Object_Undo(self,*args):
""" Object_Undo(self: GH_IconDisplayAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,obj):
""" __new__(cls: type,obj: IGH_DocumentObject) """
pass
class GH_IconOverrideAction(GH_ObjectUndoAction,IGH_UndoAction,GH_ISerializable):
""" GH_IconOverrideAction(obj: IGH_DocumentObject) """
def Internal_Redo(self,*args):
""" Internal_Redo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Internal_Undo(self,*args):
""" Internal_Undo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Object_Redo(self,*args):
""" Object_Redo(self: GH_IconOverrideAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def Object_Undo(self,*args):
""" Object_Undo(self: GH_IconOverrideAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,obj):
""" __new__(cls: type,obj: IGH_DocumentObject) """
pass
class GH_LayoutAction(GH_ObjectUndoAction,IGH_UndoAction,GH_ISerializable):
""" GH_LayoutAction(obj: IGH_DocumentObject) """
def Internal_Redo(self,*args):
""" Internal_Redo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Internal_Undo(self,*args):
""" Internal_Undo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Object_Redo(self,*args):
""" Object_Redo(self: GH_LayoutAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def Object_Undo(self,*args):
""" Object_Undo(self: GH_LayoutAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,obj):
""" __new__(cls: type,obj: IGH_DocumentObject) """
pass
class GH_LockedAction(GH_ObjectUndoAction,IGH_UndoAction,GH_ISerializable):
""" GH_LockedAction(obj: IGH_ActiveObject) """
def Internal_Redo(self,*args):
""" Internal_Redo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Internal_Undo(self,*args):
""" Internal_Undo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Object_Redo(self,*args):
""" Object_Redo(self: GH_LockedAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def Object_Undo(self,*args):
""" Object_Undo(self: GH_LockedAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,obj):
""" __new__(cls: type,obj: IGH_ActiveObject) """
pass
ExpiresSolution=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get: ExpiresSolution(self: GH_LockedAction) -> bool
"""
class GH_NickNameAction(GH_ObjectUndoAction,IGH_UndoAction,GH_ISerializable):
""" GH_NickNameAction(obj: IGH_DocumentObject) """
def Internal_Redo(self,*args):
""" Internal_Redo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Internal_Undo(self,*args):
""" Internal_Undo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Object_Redo(self,*args):
""" Object_Redo(self: GH_NickNameAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def Object_Undo(self,*args):
""" Object_Undo(self: GH_NickNameAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,obj):
""" __new__(cls: type,obj: IGH_DocumentObject) """
pass
class GH_PersistentDataAction(GH_ObjectUndoAction,IGH_UndoAction,GH_ISerializable):
""" GH_PersistentDataAction[T](obj: GH_PersistentParam[T]) """
def Internal_Redo(self,*args):
""" Internal_Redo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Internal_Undo(self,*args):
""" Internal_Undo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Object_Redo(self,*args):
""" Object_Redo(self: GH_PersistentDataAction[T],doc: GH_Document,obj: IGH_DocumentObject) """
pass
def Object_Undo(self,*args):
""" Object_Undo(self: GH_PersistentDataAction[T],doc: GH_Document,obj: IGH_DocumentObject) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,obj):
""" __new__(cls: type,obj: GH_PersistentParam[T]) """
pass
ExpiresSolution=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get: ExpiresSolution(self: GH_PersistentDataAction[T]) -> bool
"""
class GH_PivotAction(GH_ObjectUndoAction,IGH_UndoAction,GH_ISerializable):
""" GH_PivotAction(obj: IGH_DocumentObject) """
def Internal_Redo(self,*args):
""" Internal_Redo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Internal_Undo(self,*args):
""" Internal_Undo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Object_Redo(self,*args):
""" Object_Redo(self: GH_PivotAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def Object_Undo(self,*args):
""" Object_Undo(self: GH_PivotAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,obj):
""" __new__(cls: type,obj: IGH_DocumentObject) """
pass
class GH_RemoveObjectAction(GH_ArchivedUndoAction,IGH_UndoAction,GH_ISerializable):
""" GH_RemoveObjectAction(obj: IGH_DocumentObject) """
def Deserialize(self,*args):
""" Deserialize(self: GH_ArchivedUndoAction,obj: GH_ISerializable) """
pass
def Internal_Redo(self,*args):
""" Internal_Redo(self: GH_RemoveObjectAction,doc: GH_Document) """
pass
def Internal_Undo(self,*args):
""" Internal_Undo(self: GH_RemoveObjectAction,doc: GH_Document) """
pass
def Serialize(self,*args):
""" Serialize(self: GH_ArchivedUndoAction,obj: GH_ISerializable) """
pass
def SerializeToByteArray(self,*args):
""" SerializeToByteArray(self: GH_ArchivedUndoAction,obj: GH_ISerializable) -> Array[Byte] """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,obj):
""" __new__(cls: type,obj: IGH_DocumentObject) """
pass
ExpiresSolution=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get: ExpiresSolution(self: GH_RemoveObjectAction) -> bool
"""
m_data=None
class GH_RemoveStateAction(GH_ArchivedUndoAction,IGH_UndoAction,GH_ISerializable):
""" GH_RemoveStateAction(index: int,state: GH_State) """
def Deserialize(self,*args):
""" Deserialize(self: GH_ArchivedUndoAction,obj: GH_ISerializable) """
pass
def Internal_Redo(self,*args):
""" Internal_Redo(self: GH_RemoveStateAction,doc: GH_Document) """
pass
def Internal_Undo(self,*args):
""" Internal_Undo(self: GH_RemoveStateAction,doc: GH_Document) """
pass
def Read(self,reader):
""" Read(self: GH_RemoveStateAction,reader: GH_IReader) -> bool """
pass
def Serialize(self,*args):
""" Serialize(self: GH_ArchivedUndoAction,obj: GH_ISerializable) """
pass
def SerializeToByteArray(self,*args):
""" SerializeToByteArray(self: GH_ArchivedUndoAction,obj: GH_ISerializable) -> Array[Byte] """
pass
def Write(self,writer):
""" Write(self: GH_RemoveStateAction,writer: GH_IWriter) -> bool """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,index,state):
""" __new__(cls: type,index: int,state: GH_State) """
pass
m_data=None
class GH_WireAction(GH_UndoAction,IGH_UndoAction,GH_ISerializable):
""" GH_WireAction(param: IGH_Param) """
def Internal_Redo(self,*args):
""" Internal_Redo(self: GH_WireAction,doc: GH_Document) """
pass
def Internal_Undo(self,*args):
""" Internal_Undo(self: GH_WireAction,doc: GH_Document) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,param):
""" __new__(cls: type,param: IGH_Param) """
pass
ExpiresSolution=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get: ExpiresSolution(self: GH_WireAction) -> bool
"""
class GH_WireDisplayAction(GH_ObjectUndoAction,IGH_UndoAction,GH_ISerializable):
""" GH_WireDisplayAction(obj: IGH_Param) """
def Internal_Redo(self,*args):
""" Internal_Redo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Internal_Undo(self,*args):
""" Internal_Undo(self: GH_ObjectUndoAction,doc: GH_Document) """
pass
def Object_Redo(self,*args):
""" Object_Redo(self: GH_WireDisplayAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def Object_Undo(self,*args):
""" Object_Undo(self: GH_WireDisplayAction,doc: GH_Document,obj: IGH_DocumentObject) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,obj):
""" __new__(cls: type,obj: IGH_Param) """
pass
# src/models/__init__.py (SahilJ97/Explainable-Stance-Detection, MIT)
from src.models.bert_basic import BertBasic
from src.models.bert_joint import BertJoint
# python/testData/refactoring/pullup/instanceNotDeclaredInInit.after.py (jnthn/intellij-community, Apache-2.0)
class Parent(object):
def __init__(self):
self.foo = 12
class Child(Parent):
def foo(self):
self.foo = 12
# tests/app/test_works.py (gwu-libraries/orcid2vivo, MIT)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from unittest import TestCase
import json
from orcid2vivo_app.works import WorksCrosswalk
import orcid2vivo_app.vivo_namespace as ns
from rdflib import Graph, Literal, RDFS, RDF
from orcid2vivo_app.vivo_namespace import VIVO
from orcid2vivo_app.vivo_uri import HashIdentifierStrategy
from orcid2vivo import SimpleCreateEntitiesStrategy
# Saving this because we will be monkey patching it
orig_fetch_crossref_doi = WorksCrosswalk._fetch_crossref_doi
# curl -H "Accept: application/json" https://pub.orcid.org/v2.0/0000-0003-3441-946X/work/15628639 | jq '.' | pbcopy
class TestWorks(TestCase):
def setUp(self):
self.graph = Graph(namespace_manager=ns.ns_manager)
self.person_uri = ns.D["test"]
self.create_strategy = SimpleCreateEntitiesStrategy(HashIdentifierStrategy(), person_uri=self.person_uri)
WorksCrosswalk._fetch_crossref_doi = staticmethod(orig_fetch_crossref_doi)
self.crosswalker = WorksCrosswalk(identifier_strategy=self.create_strategy,
create_strategy=self.create_strategy)
def test_no_works(self):
orcid_profile = json.loads("""
{
"activities-summary": {
"works": {
"group": []
}
}
}
""")
self.crosswalker.crosswalk(orcid_profile, self.person_uri, self.graph)
self.assertEqual(0, len(self.graph))
def test_only_handled_work_types(self):
work_profile = json.loads("""
{
"type": "NOT_A_BOOK_CHAPTER"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "nobody", self.graph)
self.assertEqual(0, len(self.graph))
def test_authorship(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/15643392",
"title": {
"title": {
"value": "Persistent identifiers can improve provenance and attribution and encourage sharing of research results"
},
"subtitle": null,
"translated-title": null
},
"type": "JOURNAL_ARTICLE"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertEqual(1, len(list(self.graph[: RDF["type"]: VIVO["Authorship"]])))
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?doc rdfs:label "Persistent identifiers can improve provenance and attribution and encourage sharing of research results" .
?auth a vivo:Authorship .
?auth vivo:relates ?doc, d:test .
}
""")))
def test_orcid_title_no_subtitle(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/15643384",
"title": {
"title": {
"value": "Are race, ethnicity, and medical school affiliation associated with NIH R01 type 1 award probability for physician investigators?"
},
"subtitle": null,
"translated-title": null
},
"type": "JOURNAL_ARTICLE"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(list(self.graph[: RDFS["label"]: Literal(
"Are race, ethnicity, and medical school affiliation associated with NIH R01 type 1 award probability for "
"physician investigators?")]))
def test_orcid_title_and_subtitle(self):
work_profile = json.loads("""
{
"path": "/0000-0003-3441-946X/work/15628639",
"title": {
"title": {
"value": "Substance use disorder among people with first-episode psychosis"
},
"subtitle": {
"value": "A systematic review of course and treatment"
},
"translated-title": null
},
"type": "JOURNAL_ARTICLE"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Wisdom", self.graph)
self.assertTrue(list(self.graph[: RDFS["label"]: Literal(
"Substance use disorder among people with first-episode psychosis: A systematic review of course and "
"treatment")]))
def test_bibtex_title(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/15643384",
"title": {
"title": {
"value": "Not the title"
},
"subtitle": null,
"translated-title": null
},
"citation": {
"citation-type": "BIBTEX",
"citation-value": "@article{Haak2012,title = {Are race, ethnicity, and medical school affiliation associated with NIH R01 type 1 award probability for physician investigators?},journal = {Academic Medicine},year = {2012},volume = {87},number = {11},pages = {1516-1524},author = {Ginther, D.K. and Haak, L.L. and Schaffer, W.T. and Kington, R.}}"
},
"type": "JOURNAL_ARTICLE"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(list(self.graph[: RDFS["label"]: Literal(
"Are race, ethnicity, and medical school affiliation associated with NIH R01 type 1 award probability for "
"physician investigators?")]))
def test_crossref_title(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/15643384",
"title": {
"title": {
"value": "Not the title"
},
"subtitle": null,
"translated-title": null
},
"citation": {
"citation-type": "BIBTEX",
"citation-value": "@article{Haak2012,title = {Not the title},journal = {Academic Medicine},year = {2012},volume = {87},number = {11},pages = {1516-1524},author = {Ginther, D.K. and Haak, L.L. and Schaffer, W.T. and Kington, R.}}"
},
"type": "JOURNAL_ARTICLE",
"external-ids": {
"external-id": [
{
"external-id-type": "doi",
"external-id-value": "10.1097/ACM.0b013e31826d726b",
"external-id-url": null,
"external-id-relationship": "SELF"
},
{
"external-id-type": "eid",
"external-id-value": "2-s2.0-84869886841",
"external-id-url": null,
"external-id-relationship": "SELF"
}
]
}
}
""")
WorksCrosswalk._fetch_crossref_doi = staticmethod(lambda doi: json.loads("""
{
"title": [
"Are Race, Ethnicity, and Medical School Affiliation Associated With NIH R01 Type 1 Award Probability for Physician Investigators?"
]
}
"""))
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(list(self.graph[: RDFS["label"]: Literal(
"Are Race, Ethnicity, and Medical School Affiliation Associated With NIH R01 Type 1 Award Probability for "
"Physician Investigators?")]))
def test_bibtex_publisher(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5014-4975/work/13266925",
"short-description": null,
"citation": {
"citation-type": "BIBTEX",
"citation-value": "@book{Hardt_2014,doi = {10.1093/acprof:oso/9780199337118.001.0001},url = {http://dx.doi.org/10.1093/acprof:oso/9780199337118.001.0001},year = 2014,month = {Feb},publisher = {Oxford University Press},author = {Heidi Hardt},title = {Time to React}}"
},
"type": "BOOK"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Hardt", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:Book .
?doc vivo:publisher ?pub .
?pub a foaf:Organization .
?pub rdfs:label "Oxford University Press" .
}
""")))
def test_crossref_publisher(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/15643382",
"citation": {
"citation-type": "BIBTEX",
"citation-value": "@article{Haak2012,title = {Standards and infrastructure for innovation data exchange},journal = {Science},year = {2012},volume = {338},number = {6104},pages = {196-197},author = {Haak, L.L. and Baker, D. and Ginther, D.K. and Gordon, G.J. and Probus, M.A. and Kannankutty, N. and Weinberg, B.A.}}"
},
"type": "JOURNAL_ARTICLE",
"external-ids": {
"external-id": [
{
"external-id-type": "doi",
"external-id-value": "10.1126/science.1221840",
"external-id-url": null,
"external-id-relationship": "SELF"
},
{
"external-id-type": "eid",
"external-id-value": "2-s2.0-84867318319",
"external-id-url": null,
"external-id-relationship": "SELF"
}
]
}
}
""")
WorksCrosswalk._fetch_crossref_doi = staticmethod(lambda doi: json.loads("""
{
"publisher": "Ovid Technologies (Wolters Kluwer Health)"
}
"""))
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?doc vivo:publisher ?pub .
?pub a foaf:Organization .
?pub rdfs:label "Ovid Technologies (Wolters Kluwer Health)" .
}
""")))
def test_bibtex_volume_and_number_and_pages(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5003-0230/work/13540323",
"type": "BOOK",
"citation": {
"citation-type": "BIBTEX",
"citation-value": "@article { viladomat1997,title = {Narcissus alkaloids},journal = {Studies in Natural Products Chemistry},year = {1997},volume = {20},number = {PART F},pages = {323-405},author = {Bastida, J. and Viladomat, F. and Codina, C.}}"
}
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Viladomat", self.graph)
self.assertTrue(list(self.graph[: ns.BIBO["volume"]: Literal("20")]))
self.assertTrue(list(self.graph[: ns.BIBO["issue"]: Literal("PART F")]))
self.assertTrue(list(self.graph[: ns.BIBO["pageStart"]: Literal("323")]))
self.assertTrue(list(self.graph[: ns.BIBO["pageEnd"]: Literal("405")]))
def test_crossref_volume_and_number_and_pages(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/8289794",
"type": "JOURNAL_ARTICLE",
"title": {
"title": {
"value": "Are Race, Ethnicity, and Medical School Affiliation Associated With NIH R01 Type 1 Award Probability for Physician Investigators?"
},
"subtitle": null,
"translated-title": null
},
"external-ids": {
"external-id": [
{
"external-id-type": "doi",
"external-id-value": "10.1097/acm.0b013e31826d726b",
"external-id-url": null,
"external-id-relationship": "SELF"
}
]
}
}
""")
WorksCrosswalk._fetch_crossref_doi = staticmethod(lambda doi: json.loads("""
{
"page": "1516-1524",
"issue": "87",
"volume": "11"
}
"""))
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(list(self.graph[: ns.BIBO["volume"]: Literal("11")]))
self.assertTrue(list(self.graph[: ns.BIBO["issue"]: Literal("87")]))
self.assertTrue(list(self.graph[: ns.BIBO["pageStart"]: Literal("1516")]))
self.assertTrue(list(self.graph[: ns.BIBO["pageEnd"]: Literal("1524")]))
def test_orcid_pubdate(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/8289794",
"type": "JOURNAL_ARTICLE",
"title": {
"title": {
"value": "Are Race, Ethnicity, and Medical School Affiliation Associated With NIH R01 Type 1 Award Probability for Physician Investigators?"
},
"subtitle": null,
"translated-title": null
},
"publication-date": {
"year": {
"value": "2013"
},
"month": {
"value": "11"
},
"day": {
"value": "01"
},
"media-type": null
}
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?doc vivo:dateTimeValue ?dt .
?dt a vivo:DateTimeValue .
?dt rdfs:label "November 1, 2013" .
?dt vivo:dateTime "2013-11-01T00:00:00"^^xsd:dateTime .
?dt vivo:dateTimePrecision vivo:yearMonthDayPrecision .
}
""")))
def test_orcid_pubdate_year_only(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/8289794",
"type": "JOURNAL_ARTICLE",
"title": {
"title": {
"value": "Are Race, Ethnicity, and Medical School Affiliation Associated With NIH R01 Type 1 Award Probability for Physician Investigators?"
},
"subtitle": null,
"translated-title": null
},
"publication-date": {
"year": {
"value": "2013"
},
"month": null,
"day": null,
"media-type": null
}
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?doc vivo:dateTimeValue ?dt .
?dt a vivo:DateTimeValue .
?dt rdfs:label "2013" .
?dt vivo:dateTime "2013-01-01T00:00:00"^^xsd:dateTime .
?dt vivo:dateTimePrecision vivo:yearPrecision .
}
""")))
def test_crossref_pubdate(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/8289794",
"type": "JOURNAL_ARTICLE",
"title": {
"title": {
"value": "Are Race, Ethnicity, and Medical School Affiliation Associated With NIH R01 Type 1 Award Probability for Physician Investigators?"
},
"subtitle": null,
"translated-title": null
},
"publication-date": {
"year": {
"value": "2013"
},
"month": null,
"day": null,
"media-type": null
},
"external-ids": {
"external-id": [
{
"external-id-type": "doi",
"external-id-value": "10.1097/acm.0b013e31826d726b",
"external-id-url": null,
"external-id-relationship": "SELF"
}
]
}
}
""")
WorksCrosswalk._fetch_crossref_doi = staticmethod(lambda doi: json.loads("""
{
"issued":{"date-parts":[[2012,10,31]]}
}
"""))
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?doc vivo:dateTimeValue ?dt .
?dt a vivo:DateTimeValue .
?dt rdfs:label "October 31, 2012" .
?dt vivo:dateTime "2012-10-31T00:00:00"^^xsd:dateTime .
?dt vivo:dateTimePrecision vivo:yearMonthDayPrecision .
}
""")))
def test_crossref_pubdate_year_only(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/8289794",
"type": "JOURNAL_ARTICLE",
"title": {
"title": {
"value": "Are Race, Ethnicity, and Medical School Affiliation Associated With NIH R01 Type 1 Award Probability for Physician Investigators?"
},
"subtitle": null,
"translated-title": null
},
"publication-date": {
"year": {
"value": "2013"
},
"month": null,
"day": null,
"media-type": null
},
"external-ids": {
"external-id": [
{
"external-id-type": "doi",
"external-id-value": "10.1097/acm.0b013e31826d726b",
"external-id-url": null,
"external-id-relationship": "SELF"
}
]
}
}
""")
WorksCrosswalk._fetch_crossref_doi = staticmethod(lambda doi: json.loads("""
{
"issued":{"date-parts":[[2012]]}
}
"""))
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?doc vivo:dateTimeValue ?dt .
?dt a vivo:DateTimeValue .
?dt rdfs:label "2012" .
?dt vivo:dateTime "2012-01-01T00:00:00"^^xsd:dateTime .
?dt vivo:dateTimePrecision vivo:yearPrecision .
}
""")))
def test_bibtex_pubdate(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5000-0736/work/11557873",
"citation": {
"citation-type": "BIBTEX",
"citation-value": "@article { chichorro2014,title = {Chronological link between deep-seated processes in magma chambers and eruptions: Permo-Carboniferous magmatism in the core of Pangaea (Southern Pyrenees)},journal = {Gondwana Research},year = {2014},volume = {25},number = {1},pages = {290-308},author = {Pereira, M.F. and Castro, A. and Chichorro, M. and Fernández, C. and Díaz-Alvarado, J. and Martí, J. and Rodríguez, C.}}"
},
"type": "JOURNAL_ARTICLE"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Chichorro", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?doc vivo:dateTimeValue ?dt .
?dt a vivo:DateTimeValue .
?dt rdfs:label "2014" .
?dt vivo:dateTime "2014-01-01T00:00:00"^^xsd:dateTime .
?dt vivo:dateTimePrecision vivo:yearPrecision .
}
""")))
def test_bibtex_pubdate_in_press(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5000-0736/work/11557873",
"citation": {
"citation-type": "BIBTEX",
"citation-value": "@article { chichorro2014,title = {Chronological link between deep-seated processes in magma chambers and eruptions: Permo-Carboniferous magmatism in the core of Pangaea (Southern Pyrenees)},journal = {Gondwana Research},year = {in press},volume = {25},number = {1},pages = {290-308}}"
},
"type": "JOURNAL_ARTICLE"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Chichorro", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
filter not exists {
?doc vivo:dateTimeValue ?dt .
}
}
""")))
def test_book_chapter(self):
work_profile = json.loads("""
{
"path": "/0000-0002-9446-437X/work/35153466",
"title": {
"title": {
"value": "Numerical Methods for Multi-term Fractional Boundary Value Problems"
},
"subtitle": null,
"translated-title": null
},
"citation": {
"citation-type": "BIBTEX",
"citation-value": "@incollection{\\nyear={2013},\\nisbn={978-1-4614-7332-9},\\nbooktitle={Differential and Difference Equations with Applications},\\nvolume={47},\\nseries={Springer Proceedings in Mathematics & Statistics},\\neditor={Pinelas, Sandra and Chipot, Michel and Dosla, Zuzana},\\ndoi={10.1007/978-1-4614-7333-6_48},\\ntitle={Numerical Methods for Multi-term Fractional Boundary Value Problems},\\nurl={http://dx.doi.org/10.1007/978-1-4614-7333-6_48},\\npublisher={Springer New York},\\npages={535-542},\\nlanguage={English}\\n}\\n"
},
"type": "BOOK_CHAPTER"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Ford", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:Chapter .
?doc rdfs:label "Numerical Methods for Multi-term Fractional Boundary Value Problems" .
?doc bibo:pageStart "535" .
?doc bibo:pageEnd "542" .
?doc vivo:hasPublicationVenue ?pv .
?pv a bibo:Book .
?pv rdfs:label "Differential and Difference Equations with Applications" .
}
""")))
def test_crossref_subject(self):
work_profile = json.loads("""
{
"path": "/0000-0003-3441-946X/work/15628641",
"title": {
"title": {
"value": "Substance abuse treatment programs' data management capacity: An exploratory study"
},
"subtitle": null,
"translated-title": null
},
"type": "JOURNAL_ARTICLE",
"external-ids": {
"external-id": [
{
"external-id-type": "doi",
"external-id-value": "10.1007/s11414-010-9221-z",
"external-id-url": {
"value": ""
},
"external-id-relationship": "SELF"
},
{
"external-id-type": "eid",
"external-id-value": "2-s2.0-79955719929",
"external-id-url": {
"value": ""
},
"external-id-relationship": "SELF"
}
]
}
}
""")
WorksCrosswalk._fetch_crossref_doi = staticmethod(lambda doi: json.loads("""
{
"subject":["Health(social science)","Public Health, Environmental and Occupational Health","Health Policy"]
}
"""))
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Wisdom", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?doc vivo:hasSubjectArea ?sub1 .
?sub1 a skos:Concept .
?sub1 rdfs:label "Health(social science)" .
?doc vivo:hasSubjectArea ?sub2 .
?sub2 a skos:Concept .
?sub2 rdfs:label "Public Health, Environmental and Occupational Health" .
?doc vivo:hasSubjectArea ?sub3 .
?sub3 a skos:Concept .
?sub3 rdfs:label "Health Policy" .
}
""")))
def test_crossref_authors(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/8289794",
"type": "JOURNAL_ARTICLE",
"title": {
"title": {
"value": "Are Race, Ethnicity, and Medical School Affiliation Associated With NIH R01 Type 1 Award Probability for Physician Investigators?"
},
"subtitle": null,
"translated-title": null
},
"publication-date": {
"year": {
"value": "2013"
},
"month": null,
"day": null,
"media-type": null
},
"external-ids": {
"external-id": [
{
"external-id-type": "doi",
"external-id-value": "10.1097/acm.0b013e31826d726b",
"external-id-url": null,
"external-id-relationship": "SELF"
}
]
}
}
""")
WorksCrosswalk._fetch_crossref_doi = staticmethod(lambda doi: json.loads("""
{
"author":[{"affiliation":[],"family":"Ginther","given":"Donna K."},{"affiliation":[],"family":"Haak","given":"Laurel L."},{"affiliation":[],"family":"Schaffer","given":"Walter T."},{"affiliation":[],"family":"Kington","given":"Raynard"}]
}
"""))
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?auth1 a vivo:Authorship .
?auth1 vivo:relates ?doc, d:test.
?auth2 a vivo:Authorship .
?auth2 vivo:relates ?doc, ?per2 .
?per2 a foaf:Person .
?per2 rdfs:label "Walter T. Schaffer" .
?auth3 a vivo:Authorship .
?auth3 vivo:relates ?doc, ?per3 .
?per3 a foaf:Person .
?per3 rdfs:label "Donna K. Ginther" .
?auth4 a vivo:Authorship .
?auth4 vivo:relates ?doc, ?per4 .
?per4 a foaf:Person .
?per4 rdfs:label "Raynard Kington" .
}
""")))
def test_orcid_authors_reversed(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/15643384",
"title": {
"title": {
"value": "Are race, ethnicity, and medical school affiliation associated with NIH R01 type 1 award probability for physician investigators?"
},
"subtitle": null,
"translated-title": null
},
"type": "JOURNAL_ARTICLE",
"contributors": {
"contributor": [
{
"contributor-orcid": null,
"credit-name": {
"value": "Ginther, D.K."
},
"contributor-email": null,
"contributor-attributes": null
},
{
"contributor-orcid": null,
"credit-name": {
"value": "Haak, L.L."
},
"contributor-email": null,
"contributor-attributes": null
},
{
"contributor-orcid": null,
"credit-name": {
"value": "Schaffer, W.T."
},
"contributor-email": null,
"contributor-attributes": null
},
{
"contributor-orcid": null,
"credit-name": {
"value": "Kington, R."
},
"contributor-email": null,
"contributor-attributes": null
}
]
}
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?auth1 a vivo:Authorship .
?auth1 vivo:relates ?doc, d:test.
?auth2 a vivo:Authorship .
?auth2 vivo:relates ?doc, ?per2 .
?per2 a foaf:Person .
?per2 rdfs:label "W.T. Schaffer" .
?auth3 a vivo:Authorship .
?auth3 vivo:relates ?doc, ?per3 .
?per3 a foaf:Person .
?per3 rdfs:label "D.K. Ginther" .
?auth4 a vivo:Authorship .
?auth4 vivo:relates ?doc, ?per4 .
?per4 a foaf:Person .
?per4 rdfs:label "R. Kington" .
}
""")))
def test_no_orcid_authors(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/15643384",
"title": {
"title": {
"value": "Are race, ethnicity, and medical school affiliation associated with NIH R01 type 1 award probability for physician investigators?"
},
"subtitle": null,
"translated-title": null
},
"type": "JOURNAL_ARTICLE",
"contributors": {
"contributor": []
}
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?auth1 a vivo:Authorship .
?auth1 vivo:relates ?doc, d:test.
}
""")))
def test_null_credit_name(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/15643384",
"title": {
"title": {
"value": "Are race, ethnicity, and medical school affiliation associated with NIH R01 type 1 award probability for physician investigators?"
},
"subtitle": null,
"translated-title": null
},
"type": "JOURNAL_ARTICLE",
"contributors": {
"contributor": [
{
"contributor-orcid": null,
"credit-name": null,
"contributor-email": null,
"contributor-attributes": null
}
]
}
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?auth1 a vivo:Authorship .
?auth1 vivo:relates ?doc, d:test.
}
""")))
def test_orcid_authors_not_reversed(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/15643384",
"title": {
"title": {
"value": "Are race, ethnicity, and medical school affiliation associated with NIH R01 type 1 award probability for physician investigators?"
},
"subtitle": null,
"translated-title": null
},
"type": "JOURNAL_ARTICLE",
"contributors": {
"contributor": [
{
"contributor-orcid": null,
"credit-name": {
"value": "D.K. Ginther"
},
"contributor-email": null,
"contributor-attributes": null
},
{
"contributor-orcid": null,
"credit-name": {
"value": "L.L. Haak"
},
"contributor-email": null,
"contributor-attributes": null
},
{
"contributor-orcid": null,
"credit-name": {
"value": "W.T. Schaffer"
},
"contributor-email": null,
"contributor-attributes": null
},
{
"contributor-orcid": null,
"credit-name": {
"value": "R. Kington"
},
"contributor-email": null,
"contributor-attributes": null
}
]
}
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?auth1 a vivo:Authorship .
?auth1 vivo:relates ?doc, d:test.
?auth2 a vivo:Authorship .
?auth2 vivo:relates ?doc, ?per2 .
?per2 a foaf:Person .
?per2 rdfs:label "W. T. Schaffer" .
?auth3 a vivo:Authorship .
?auth3 vivo:relates ?doc, ?per3 .
?per3 a foaf:Person .
?per3 rdfs:label "D. K. Ginther" .
?auth4 a vivo:Authorship .
?auth4 vivo:relates ?doc, ?per4 .
?per4 a foaf:Person .
?per4 rdfs:label "R. Kington" .
}
""")))
def test_bibtex_authors_reversed(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/15643384",
"title": {
"title": {
"value": "Are race, ethnicity, and medical school affiliation associated with NIH R01 type 1 award probability for physician investigators?"
},
"subtitle": null,
"translated-title": null
},
"citation": {
"citation-type": "BIBTEX",
"citation-value": "@article{Haak2012,title = {Are race, ethnicity, and medical school affiliation associated with NIH R01 type 1 award probability for physician investigators?},journal = {Academic Medicine},year = {2012},volume = {87},number = {11},pages = {1516-1524},author = {Ginther, D.K. and Haak, L.L. and Schaffer, W.T. and Kington, R.}}"
},
"type": "JOURNAL_ARTICLE"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?auth1 a vivo:Authorship .
?auth1 vivo:relates ?doc, d:test.
?auth2 a vivo:Authorship .
?auth2 vivo:relates ?doc, ?per2 .
?per2 a foaf:Person .
?per2 rdfs:label "W.T. Schaffer" .
?auth3 a vivo:Authorship .
?auth3 vivo:relates ?doc, ?per3 .
?per3 a foaf:Person .
?per3 rdfs:label "D.K. Ginther" .
?auth4 a vivo:Authorship .
?auth4 vivo:relates ?doc, ?per4 .
?per4 a foaf:Person .
?per4 rdfs:label "R. Kington" .
}
""")))
def test_bibtex_authors_not_reversed(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/15643384",
"title": {
"title": {
"value": "Are race, ethnicity, and medical school affiliation associated with NIH R01 type 1 award probability for physician investigators?"
},
"subtitle": null,
"translated-title": null
},
"citation": {
"citation-type": "BIBTEX",
"citation-value": "@article{Haak2012,title = {Are race, ethnicity, and medical school affiliation associated with NIH R01 type 1 award probability for physician investigators?},journal = {Academic Medicine},year = {2012},volume = {87},number = {11},pages = {1516-1524},author = {D.K. Ginther and L.L. Haak and W.T. Schaffer and R. Kington}}"
},
"type": "JOURNAL_ARTICLE"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?auth1 a vivo:Authorship .
?auth1 vivo:relates ?doc, d:test.
?auth2 a vivo:Authorship .
?auth2 vivo:relates ?doc, ?per2 .
?per2 a foaf:Person .
?per2 rdfs:label "W. T. Schaffer" .
?auth3 a vivo:Authorship .
?auth3 vivo:relates ?doc, ?per3 .
?per3 a foaf:Person .
?per3 rdfs:label "D. K. Ginther" .
?auth4 a vivo:Authorship .
?auth4 vivo:relates ?doc, ?per4 .
?per4 a foaf:Person .
?per4 rdfs:label "R. Kington" .
}
""")))
def test_bibtex_editor(self):
work_profile = json.loads("""
{
"path": "/0000-0002-9446-437X/work/35153466",
"title": {
"title": {
"value": "Numerical Methods for Multi-term Fractional Boundary Value Problems"
},
"subtitle": null,
"translated-title": null
},
"citation": {
"citation-type": "BIBTEX",
"citation-value": "@incollection{\\nyear={2013},\\nisbn={978-1-4614-7332-9},\\nbooktitle={Differential and Difference Equations with Applications},\\nvolume={47},\\nseries={Springer Proceedings in Mathematics & Statistics},\\neditor={Pinelas, Sandra and Chipot, Michel and Dosla, Zuzana},\\ndoi={10.1007/978-1-4614-7333-6_48},\\ntitle={Numerical Methods for Multi-term Fractional Boundary Value Problems},\\nurl={http://dx.doi.org/10.1007/978-1-4614-7333-6_48},\\npublisher={Springer New York},\\npages={535-542},\\nlanguage={English}\\n}\\n"
},
"type": "BOOK_CHAPTER"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Ford", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:Chapter .
?edit1 a vivo:Editorship .
?edit1 vivo:relates ?doc, ?per1 .
?per1 a foaf:Person .
?per1 rdfs:label "Michel Chipot" .
?edit2 a vivo:Editorship .
?edit2 vivo:relates ?doc, ?per2 .
?per2 a foaf:Person .
?per2 rdfs:label "Zuzana Dosla" .
?edit3 a vivo:Editorship .
?edit3 vivo:relates ?doc, ?per3 .
?per3 a foaf:Person .
?per3 rdfs:label "Sandra Pinelas" .
}
""")))
def test_orcid_editor(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5014-7658/work/11269935",
"title": {
"title": {
"value": "Media and Sports/Media e Desporto"
},
"subtitle": null,
"translated-title": null
},
"type": "BOOK",
"contributors": {
"contributor": [
{
"contributor-orcid": null,
"credit-name": null,
"contributor-email": null,
"contributor-attributes": {
"contributor-sequence": "FIRST",
"contributor-role": "EDITOR"
}
}
]
}
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Almeida", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:Book .
?edit1 a vivo:Editorship .
?edit1 vivo:relates ?doc, d:test .
}
""")))
def test_external_identifier(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/15643384",
"title": {
"title": {
"value": "Are race, ethnicity, and medical school affiliation associated with NIH R01 type 1 award probability for physician investigators?"
},
"subtitle": null,
"translated-title": null
},
"type": "JOURNAL_ARTICLE",
"external-ids": {
"external-id": [
{
"external-id-type": "doi",
"external-id-value": "10.1097/ACM.0b013e31826d726b",
"external-id-url": null,
"external-id-relationship": "SELF"
},
{
"external-id-type": "eid",
"external-id-value": "2-s2.0-84869886841",
"external-id-url": null,
"external-id-relationship": "SELF"
}
]
}
}
""")
WorksCrosswalk._fetch_crossref_doi = staticmethod(lambda doi: json.loads("""
{
}
"""))
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?doc bibo:doi "10.1097/ACM.0b013e31826d726b" .
?vc a vcard:Kind .
?doc obo:ARG_2000028 ?vc .
?vc vcard:hasURL ?vcurl .
?vcurl a vcard:URL .
?vcurl vcard:url "http://dx.doi.org/10.1097/ACM.0b013e31826d726b"^^xsd:anyURI .
}
""")))
def test_isbn13(self):
work_profile = json.loads("""
{
"path": "/0000-0001-9002-015X/work/13926662",
"title": {
"title": {
"value": "Estrategias de frontera desde la interculturalidad."
},
"subtitle": {
"value": "El caso del we tripantü mapuche hoy"
},
"translated-title": null
},
"journal-title": {
"value": "Actas del XIII Congreso de Antropología de la Federación de Asociaciones de Antropología del Estado Español: Periferias, Fronteras y Diálogos. "
},
"type": "CONFERENCE_PAPER",
"external-ids": {
"external-id": [
{
"external-id-type": "isbn",
"external-id-value": "978-84-697-0505-6",
"external-id-url": {
"value": ""
},
"external-id-relationship": "PART_OF"
}
]
}
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Milesi", self.graph)
self.assertTrue(list(self.graph[: ns.BIBO["isbn13"]: Literal("978-84-697-0505-6")]))
def test_isbn10(self):
work_profile = json.loads("""
{
"path": "/0000-0001-9002-015X/work/13926662",
"title": {
"title": {
"value": "Estrategias de frontera desde la interculturalidad."
},
"subtitle": {
"value": "El caso del we tripantü mapuche hoy"
},
"translated-title": null
},
"journal-title": {
"value": "Actas del XIII Congreso de Antropología de la Federación de Asociaciones de Antropología del Estado Español: Periferias, Fronteras y Diálogos. "
},
"type": "CONFERENCE_PAPER",
"external-ids": {
"external-id": [
{
"external-id-type": "isbn",
"external-id-value": "978-84-697-05",
"external-id-url": {
"value": ""
},
"external-id-relationship": "PART_OF"
}
]
}
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Milesi", self.graph)
self.assertTrue(list(self.graph[: ns.BIBO["isbn10"]: Literal("978-84-697-05")]))
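The isbn13/isbn10 tests above distinguish the two properties by digit count: "978-84-697-0505-6" carries 13 digits and maps to `bibo:isbn13`, while "978-84-697-05" carries only 10 and maps to `bibo:isbn10`. A hedged sketch of that classification rule (the function name is illustrative, not the crosswalk's actual helper):

```python
def classify_isbn(value):
    """Classify an ISBN string by digit count after removing hyphens.

    Mirrors the behavior the isbn10/isbn13 tests above rely on; the
    returned names are the BIBO ontology property local names.
    """
    digits = value.replace("-", "")
    if len(digits) == 13:
        return "isbn13"
    if len(digits) == 10:
        return "isbn10"
    return None  # not a recognizable ISBN length

print(classify_isbn("978-84-697-0505-6"))  # isbn13
print(classify_isbn("978-84-697-05"))      # isbn10
```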
def test_bibtex_doi(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5014-4975/work/13266925",
"title": {
"title": {
"value": "Time to React"
},
"subtitle": {
"value": "The Efficiency of International Organizations in Crisis Response"
},
"translated-title": null
},
"citation": {
"citation-type": "BIBTEX",
"citation-value": "@book{Hardt_2014,doi = {10.1093/acprof:oso/9780199337118.001.0001},url = {http://dx.doi.org/10.1093/acprof:oso/9780199337118.001.0001},year = 2014,month = {Feb},publisher = {Oxford University Press},author = {Heidi Hardt},title = {Time to React}}"
},
"type": "BOOK"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Hardt", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:Book .
?doc bibo:doi "10.1093/acprof:oso/9780199337118.001.0001" .
?vc a vcard:Kind .
?doc obo:ARG_2000028 ?vc .
?vc vcard:hasURL ?vcurl .
?vcurl a vcard:URL .
?vcurl vcard:url "http://dx.doi.org/10.1093/acprof:oso/9780199337118.001.0001"^^xsd:anyURI .
}
""")))
def test_bibtex_isbn(self):
work_profile = json.loads("""
{
"title": {
"title": {
"value": "Numerical methods for multi-term fractional boundary value problems, Differential and Difference Equations with Applications"
},
"subtitle": null,
"translated-title": null
},
"citation": {
"citation-type": "BIBTEX",
"citation-value": "@incollection{\\nyear={2013},\\nisbn={978-1-4614-7332-9},\\nbooktitle={Differential and Difference Equations with Applications},\\nvolume={47},\\nseries={Springer Proceedings in Mathematics & Statistics},\\neditor={Pinelas, Sandra and Chipot, Michel and Dosla, Zuzana},\\ndoi={10.1007/978-1-4614-7333-6_48},\\ntitle={Numerical Methods for Multi-term Fractional Boundary Value Problems},\\nurl={http://dx.doi.org/10.1007/978-1-4614-7333-6_48},\\npublisher={Springer New York},\\npages={535-542},\\nlanguage={English}\\n}\\n"
},
"type": "BOOK_CHAPTER"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Ford", self.graph)
self.assertTrue(list(self.graph[: ns.BIBO["isbn13"]: Literal("978-1-4614-7332-9")]))
def test_orcid_url(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5006-1520/work/13637092",
"title": {
"title": {
"value": "Popular Music and Israeli National Culture"
},
"subtitle": null,
"translated-title": null
},
"type": "BOOK",
"url": {
"value": "http://www.ucpress.edu/book.php?isbn=9780520236547"
}
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Seroussi", self.graph)
self.assertTrue(list(self.graph[: ns.VCARD["url"]: ]))
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:Book .
?vc a vcard:Kind .
?doc obo:ARG_2000028 ?vc .
?vc vcard:hasURL ?vcurl .
?vcurl a vcard:URL .
?vcurl vcard:url "http://www.ucpress.edu/book.php?isbn=9780520236547"^^xsd:anyURI .
}
""")))
def test_ignore_orcid_url(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5014-4975/work/13266925",
"title": {
"title": {
"value": "Time to React"
},
"subtitle": {
"value": "The Efficiency of International Organizations in Crisis Response"
},
"translated-title": null
},
"type": "BOOK",
"url": {
"value": "http://dx.doi.org/10.1093/acprof:oso/9780199337118.001.0001"
}
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Hardt", self.graph)
self.assertFalse(list(self.graph[: ns.VCARD["url"]: ]))
def test_bibtex_url(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5014-4975/work/13266925",
"title": {
"title": {
"value": "Time to React"
},
"subtitle": {
"value": "The Efficiency of International Organizations in Crisis Response"
},
"translated-title": null
},
"citation": {
"citation-type": "BIBTEX",
"citation-value": "@book{Hardt_2014,doi = {10.1093/acprof:oso/9780199337118.001.0001},url = {http://www.ucpress.edu/book.php?isbn=9780520236547},year = 2014,month = {Feb},publisher = {Oxford University Press},author = {Heidi Hardt},title = {Time to React}}"
},
"type": "BOOK"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Hardt", self.graph)
self.assertTrue(list(self.graph[: ns.VCARD["url"]:]))
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:Book .
?vc a vcard:Kind .
?doc obo:ARG_2000028 ?vc .
?vc vcard:hasURL ?vcurl .
?vcurl a vcard:URL .
?vcurl vcard:url "http://www.ucpress.edu/book.php?isbn=9780520236547"^^xsd:anyURI .
}
""")))
def test_ignore_bibtex_url(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5014-4975/work/13266925",
"title": {
"title": {
"value": "Time to React"
},
"subtitle": {
"value": "The Efficiency of International Organizations in Crisis Response"
},
"translated-title": null
},
"citation": {
"citation-type": "BIBTEX",
"citation-value": "@book{Hardt_2014,url = {http://dx.doi.org/10.1093/acprof:oso/9780199337118.001.0001},year = 2014,month = {Feb},publisher = {Oxford University Press},author = {Heidi Hardt},title = {Time to React}}"
},
"type": "BOOK"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Hardt", self.graph)
self.assertFalse(list(self.graph[: ns.VCARD["url"]:]))
def test_bibtex_journal(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/15643384",
"title": {
"title": {
"value": "Are race, ethnicity, and medical school affiliation associated with NIH R01 type 1 award probability for physician investigators?"
},
"subtitle": null,
"translated-title": null
},
"citation": {
"citation-type": "BIBTEX",
"citation-value": "@article { haak2012,title = {Are race, ethnicity, and medical school affiliation associated with NIH R01 type 1 award probability for physician investigators?},journal = {Academic Medicine},issn = {1040-2446},year = {2012},volume = {87},number = {11},pages = {1516-1524},author = {Ginther, D.K. and Haak, L.L. and Schaffer, W.T. and Kington, R.}}"
},
"type": "JOURNAL_ARTICLE"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?doc vivo:hasPublicationVenue ?jrnl .
?jrnl a bibo:Journal .
?jrnl rdfs:label "Academic Medicine" .
?jrnl bibo:issn "1040-2446" .
}
""")))
def test_crossref_journal(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/15643384",
"title": {
"title": {
"value": "Are race, ethnicity, and medical school affiliation associated with NIH R01 type 1 award probability for physician investigators?"
},
"subtitle": null,
"translated-title": null
},
"type": "JOURNAL_ARTICLE",
"external-ids": {
"external-id": [
{
"external-id-type": "doi",
"external-id-value": "10.1097/ACM.0b013e31826d726b",
"external-id-url": null,
"external-id-relationship": "SELF"
},
{
"external-id-type": "eid",
"external-id-value": "2-s2.0-84869886841",
"external-id-url": null,
"external-id-relationship": "SELF"
}
]
}
}
""")
WorksCrosswalk._fetch_crossref_doi = staticmethod(lambda doi: json.loads("""
{
"container-title": ["Academic Medicine", "Acad. Med."],
"ISSN": ["1040-2446", "1938-808X"]
}
"""))
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?doc vivo:hasPublicationVenue ?jrnl .
?jrnl a bibo:Journal .
?jrnl rdfs:label "Academic Medicine" .
?jrnl bibo:issn "1040-2446" .
?jrnl bibo:issn "1938-808X" .
}
""")))
def test_orcid_journal(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5109-3700/work/15643384",
"title": {
"title": {
"value": "Are race, ethnicity, and medical school affiliation associated with NIH R01 type 1 award probability for physician investigators?"
},
"subtitle": null,
"translated-title": null
},
"journal-title": {
"value": "Academic Medicine"
},
"type": "JOURNAL_ARTICLE"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Haak", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:AcademicArticle .
?doc vivo:hasPublicationVenue ?jrnl .
?jrnl a bibo:Journal .
?jrnl rdfs:label "Academic Medicine" .
}
""")))
def test_journal_issue(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5138-9384/work/10341578",
"title": {
"title": {
"value": "Egon Bittner"
},
"subtitle": {
"value": "Phenomenology in Action"
},
"translated-title": null
},
"journal-title": {
"value": "Ethnographic Studies"
},
"short-description": "Special Memorial Issue",
"type": "JOURNAL_ISSUE",
"publication-date": {
"year": {
"value": "2013"
},
"month": {
"value": "07"
},
"day": {
"value": "01"
},
"media-type": null
},
"external-ids": {
"external-id": [
{
"external-id-type": "issn",
"external-id-value": "1366-4964",
"external-id-url": null,
"external-id-relationship": "SELF"
}
]
},
"url": {
"value": "http://www.zhbluzern.ch/index.php?id=2583"
},
"contributors": {
"contributor": [
{
"contributor-orcid": {
"uri": null,
"path": null,
"host": null
},
"credit-name": {
"value": "Andrew Carlin"
},
"contributor-email": null,
"contributor-attributes": {
"contributor-sequence": "FIRST",
"contributor-role": "EDITOR"
}
}
]
}
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Carlin", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:Issue .
?doc rdfs:label "Egon Bittner: Phenomenology in Action" .
?doc vivo:dateTimeValue ?dtv .
?doc bibo:issn "1366-4964" .
?dtv a vivo:DateTimeValue .
?dtv rdfs:label "July 1, 2013" .
?doc obo:ARG_2000028 ?vcard .
?vcard vcard:hasURL ?vcardurl .
?vcardurl vcard:url "http://www.zhbluzern.ch/index.php?id=2583"^^xsd:anyURI .
?ed a vivo:Editorship .
?ed vivo:relates d:test, ?doc .
}
""")))
def test_magazine(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5019-3929/work/13302713",
"title": {
"title": {
"value": "Software as a Communication Platform"
},
"subtitle": {
"value": ""
},
"translated-title": null
},
"journal-title": {
"value": "Kunststoffe international 2009/11"
},
"type": "MAGAZINE_ARTICLE"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Peinado", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:Article .
?doc vivo:hasPublicationVenue ?jrnl .
?jrnl a bibo:Magazine .
?jrnl rdfs:label "Kunststoffe international 2009/11" .
}
""")))
def test_translation_with_bibtex(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5621-5463/work/14402996",
"title": {
"title": {
"value": "Ambiguità nelle risposte al Position Generator"
},
"subtitle": null,
"translated-title": null
},
"journal-title": {
"value": "SOCIOLOGIA E POLITICHE SOCIALI"
},
"citation": {
"citation-type": "BIBTEX",
"citation-value": "@article{sciandra2012, author= {SCIANDRA A.}, doi= {10.3280/SP2012-002006}, issn= {1591-2027}, journal= {SOCIOLOGIA E POLITICHE SOCIALI}, pages= {113--141}, title= {Ambiguita nelle risposte al Position Generator}, url= {http://dx.medra.org/10.3280/SP2012-002006}, volume= {15}, year= {2012}}"
},
"type": "TRANSLATION"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Sciandra", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:Article .
?doc rdfs:label "Ambiguita nelle risposte al Position Generator" .
d:test bibo:translator ?doc .
filter not exists {
?contr a vivo:Authorship .
?contr vivo:relates ?doc, d:test .
}
}
""")))
def test_translation(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5785-6342/work/11161159",
"title": {
"title": {
"value": "Aristóteles, Retórica, Obras Completas de Aristóteles"
},
"subtitle": {
"value": "com a colaboração de Paulo Farmhouse Alberto e Abel Nascimento Pena."
},
"translated-title": null
},
"journal-title": {
"value": "São Paulo, WMF Martins Fontes, 2012."
},
"type": "TRANSLATION",
"contributors": {
"contributor": [
{
"contributor-orcid": null,
"credit-name": null,
"contributor-email": null,
"contributor-attributes": {
"contributor-sequence": null,
"contributor-role": "CHAIR_OR_TRANSLATOR"
}
}
]
}
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Manuel", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:Document .
?doc rdfs:label "Aristóteles, Retórica, Obras Completas de Aristóteles: com a colaboração de Paulo Farmhouse Alberto e Abel Nascimento Pena." .
d:test bibo:translator ?doc .
filter not exists {
?contr a vivo:Authorship .
?contr vivo:relates ?doc, d:test .
}
}
""")))
def test_conference_paper(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5004-4536/work/14890522",
"title": {
"title": {
"value": "Noise in the InAlN/GaN HEMT transistors"
},
"subtitle": null,
"translated-title": null
},
"citation": {
"citation-type": "BIBTEX",
"citation-value": "@article { atka2010,title = {Noise in the InAlN/GaN HEMT transistors},journal = {Conference Proceedings - The 8th International Conference on Advanced Semiconductor Devices and Microsystems, ASDAM 2010},year = {2010},pages = {53-56},author = {Rendek, K. and Šatka, A. and Kováč, J. and Donoval, D.}}"
},
"type": "CONFERENCE_PAPER"
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Sitka", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a vivo:ConferencePaper .
?doc rdfs:label "Noise in the InAlN/GaN HEMT transistors" .
?doc vivo:hasPublicationVenue ?proc .
?proc a bibo:Proceedings .
?proc rdfs:label "Conference Proceedings - The 8th International Conference on Advanced Semiconductor Devices and Microsystems, ASDAM 2010" .
}
""")))
def test_patent(self):
work_profile = json.loads("""
{
"path": "/0000-0001-5061-3743/work/11259988",
"title": {
"title": {
"value": "TREATMENT OF INFECTIONS BY CARBON MONOXIDE"
},
"subtitle": null,
"translated-title": null
},
"type": "PATENT",
"external-ids": {
"external-id": [
{
"external-id-type": "other-id",
"external-id-value": "US2010196516 (A1)",
"external-id-url": {
"value": ""
},
"external-id-relationship": "SELF"
}
]
},
"url": {
"value": "http://worldwide.espacenet.com/publicationDetails/biblio?II=5&ND=3&adjacent=true&locale=en_EP&FT=D&date=20100805&CC=US&NR=2010196516A1&KC=A1"
},
"contributors": {
"contributor": [
{
"contributor-orcid": null,
"credit-name": null,
"contributor-email": null,
"contributor-attributes": {
"contributor-sequence": null,
"contributor-role": "CO_INVENTOR"
}
}
]
}
}
""")
self.crosswalker.crosswalk_work(work_profile, self.person_uri, "Romão", self.graph)
self.assertTrue(bool(self.graph.query("""
ask where {
?doc a bibo:Patent .
?doc rdfs:label "TREATMENT OF INFECTIONS BY CARBON MONOXIDE" .
d:test vivo:assigneeFor ?doc .
?doc vivo:patentNumber "US2010196516 (A1)" .
}
""")))
# Source: sdk/python/pulumi_aws/datasync/task.py (repo: aamir-locus/pulumi-aws @ 3e234b050129bde35d8e072a88bd608562f02142, licenses: ECL-2.0, Apache-2.0)
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['TaskArgs', 'Task']
@pulumi.input_type
class TaskArgs:
def __init__(__self__, *,
destination_location_arn: pulumi.Input[str],
source_location_arn: pulumi.Input[str],
cloudwatch_log_group_arn: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
options: Optional[pulumi.Input['TaskOptionsArgs']] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None):
"""
The set of arguments for constructing a Task resource.
:param pulumi.Input[str] destination_location_arn: Amazon Resource Name (ARN) of destination DataSync Location.
:param pulumi.Input[str] source_location_arn: Amazon Resource Name (ARN) of source DataSync Location.
:param pulumi.Input[str] cloudwatch_log_group_arn: Amazon Resource Name (ARN) of the CloudWatch Log Group that is used to monitor and log events in the sync task.
:param pulumi.Input[str] name: Name of the DataSync Task.
:param pulumi.Input['TaskOptionsArgs'] options: Configuration block containing options that control the default behavior when you start an execution of this DataSync Task. For each individual task execution, you can override these options by specifying an overriding configuration in those executions.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Key-value pairs of resource tags to assign to the DataSync Task.
"""
pulumi.set(__self__, "destination_location_arn", destination_location_arn)
pulumi.set(__self__, "source_location_arn", source_location_arn)
if cloudwatch_log_group_arn is not None:
pulumi.set(__self__, "cloudwatch_log_group_arn", cloudwatch_log_group_arn)
if name is not None:
pulumi.set(__self__, "name", name)
if options is not None:
pulumi.set(__self__, "options", options)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter(name="destinationLocationArn")
def destination_location_arn(self) -> pulumi.Input[str]:
"""
Amazon Resource Name (ARN) of destination DataSync Location.
"""
return pulumi.get(self, "destination_location_arn")
@destination_location_arn.setter
def destination_location_arn(self, value: pulumi.Input[str]):
pulumi.set(self, "destination_location_arn", value)
@property
@pulumi.getter(name="sourceLocationArn")
def source_location_arn(self) -> pulumi.Input[str]:
"""
Amazon Resource Name (ARN) of source DataSync Location.
"""
return pulumi.get(self, "source_location_arn")
@source_location_arn.setter
def source_location_arn(self, value: pulumi.Input[str]):
pulumi.set(self, "source_location_arn", value)
@property
@pulumi.getter(name="cloudwatchLogGroupArn")
def cloudwatch_log_group_arn(self) -> Optional[pulumi.Input[str]]:
"""
Amazon Resource Name (ARN) of the CloudWatch Log Group that is used to monitor and log events in the sync task.
"""
return pulumi.get(self, "cloudwatch_log_group_arn")
@cloudwatch_log_group_arn.setter
def cloudwatch_log_group_arn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "cloudwatch_log_group_arn", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the DataSync Task.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def options(self) -> Optional[pulumi.Input['TaskOptionsArgs']]:
"""
Configuration block containing options that control the default behavior when you start an execution of this DataSync Task. For each individual task execution, you can override these options by specifying an overriding configuration in those executions.
"""
return pulumi.get(self, "options")
@options.setter
def options(self, value: Optional[pulumi.Input['TaskOptionsArgs']]):
pulumi.set(self, "options", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Key-value pairs of resource tags to assign to the DataSync Task.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@pulumi.input_type
class _TaskState:
def __init__(__self__, *,
arn: Optional[pulumi.Input[str]] = None,
cloudwatch_log_group_arn: Optional[pulumi.Input[str]] = None,
destination_location_arn: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
options: Optional[pulumi.Input['TaskOptionsArgs']] = None,
source_location_arn: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None):
"""
Input properties used for looking up and filtering Task resources.
:param pulumi.Input[str] arn: Amazon Resource Name (ARN) of the DataSync Task.
:param pulumi.Input[str] cloudwatch_log_group_arn: Amazon Resource Name (ARN) of the CloudWatch Log Group that is used to monitor and log events in the sync task.
:param pulumi.Input[str] destination_location_arn: Amazon Resource Name (ARN) of destination DataSync Location.
:param pulumi.Input[str] name: Name of the DataSync Task.
:param pulumi.Input['TaskOptionsArgs'] options: Configuration block containing options that control the default behavior when you start an execution of this DataSync Task. For each individual task execution, you can override these options by specifying an overriding configuration in those executions.
:param pulumi.Input[str] source_location_arn: Amazon Resource Name (ARN) of source DataSync Location.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Key-value pairs of resource tags to assign to the DataSync Task.
"""
if arn is not None:
pulumi.set(__self__, "arn", arn)
if cloudwatch_log_group_arn is not None:
pulumi.set(__self__, "cloudwatch_log_group_arn", cloudwatch_log_group_arn)
if destination_location_arn is not None:
pulumi.set(__self__, "destination_location_arn", destination_location_arn)
if name is not None:
pulumi.set(__self__, "name", name)
if options is not None:
pulumi.set(__self__, "options", options)
if source_location_arn is not None:
pulumi.set(__self__, "source_location_arn", source_location_arn)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def arn(self) -> Optional[pulumi.Input[str]]:
"""
Amazon Resource Name (ARN) of the DataSync Task.
"""
return pulumi.get(self, "arn")
@arn.setter
def arn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "arn", value)
@property
@pulumi.getter(name="cloudwatchLogGroupArn")
def cloudwatch_log_group_arn(self) -> Optional[pulumi.Input[str]]:
"""
Amazon Resource Name (ARN) of the CloudWatch Log Group that is used to monitor and log events in the sync task.
"""
return pulumi.get(self, "cloudwatch_log_group_arn")
@cloudwatch_log_group_arn.setter
def cloudwatch_log_group_arn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "cloudwatch_log_group_arn", value)
@property
@pulumi.getter(name="destinationLocationArn")
def destination_location_arn(self) -> Optional[pulumi.Input[str]]:
"""
Amazon Resource Name (ARN) of destination DataSync Location.
"""
return pulumi.get(self, "destination_location_arn")
@destination_location_arn.setter
def destination_location_arn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "destination_location_arn", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the DataSync Task.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def options(self) -> Optional[pulumi.Input['TaskOptionsArgs']]:
"""
Configuration block containing options that control the default behavior when you start an execution of this DataSync Task. For each individual task execution, you can override these options by specifying an overriding configuration in those executions.
"""
return pulumi.get(self, "options")
@options.setter
def options(self, value: Optional[pulumi.Input['TaskOptionsArgs']]):
pulumi.set(self, "options", value)
@property
@pulumi.getter(name="sourceLocationArn")
def source_location_arn(self) -> Optional[pulumi.Input[str]]:
"""
Amazon Resource Name (ARN) of source DataSync Location.
"""
return pulumi.get(self, "source_location_arn")
@source_location_arn.setter
def source_location_arn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_location_arn", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Key-value pairs of resource tags to assign to the DataSync Task.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
class Task(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
cloudwatch_log_group_arn: Optional[pulumi.Input[str]] = None,
destination_location_arn: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
options: Optional[pulumi.Input[pulumi.InputType['TaskOptionsArgs']]] = None,
source_location_arn: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
__props__=None):
"""
Manages an AWS DataSync Task, which represents a configuration for synchronization. Starting an execution of these DataSync Tasks (actually synchronizing files) is performed outside of this resource.
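## Example Usage
A minimal sketch (the resource name is illustrative, and `source_location` / `destination_location` stand in for existing DataSync Location resources defined elsewhere in a Pulumi program):
```python
import pulumi_aws as aws

example = aws.datasync.Task("example",
    source_location_arn=source_location.arn,
    destination_location_arn=destination_location.arn,
    options=aws.datasync.TaskOptionsArgs(
        bytes_per_second=-1,  # assumed here to mean "no bandwidth limit"
    ))
```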
## Import
`aws_datasync_task` can be imported by using the DataSync Task Amazon Resource Name (ARN), e.g.
```sh
$ pulumi import aws:datasync/task:Task example arn:aws:datasync:us-east-1:123456789012:task/task-12345678901234567
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] cloudwatch_log_group_arn: Amazon Resource Name (ARN) of the CloudWatch Log Group that is used to monitor and log events in the sync task.
:param pulumi.Input[str] destination_location_arn: Amazon Resource Name (ARN) of destination DataSync Location.
:param pulumi.Input[str] name: Name of the DataSync Task.
:param pulumi.Input[pulumi.InputType['TaskOptionsArgs']] options: Configuration block containing options that control the default behavior when you start an execution of this DataSync Task. For each individual task execution, you can override these options by specifying an overriding configuration in those executions.
:param pulumi.Input[str] source_location_arn: Amazon Resource Name (ARN) of source DataSync Location.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Key-value pairs of resource tags to assign to the DataSync Task.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: TaskArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Manages an AWS DataSync Task, which represents a configuration for synchronization. Starting an execution of these DataSync Tasks (actually synchronizing files) is performed outside of this resource.
## Import
`aws_datasync_task` can be imported by using the DataSync Task Amazon Resource Name (ARN), e.g.
```sh
$ pulumi import aws:datasync/task:Task example arn:aws:datasync:us-east-1:123456789012:task/task-12345678901234567
```
:param str resource_name: The name of the resource.
:param TaskArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(TaskArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
cloudwatch_log_group_arn: Optional[pulumi.Input[str]] = None,
destination_location_arn: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
options: Optional[pulumi.Input[pulumi.InputType['TaskOptionsArgs']]] = None,
source_location_arn: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = TaskArgs.__new__(TaskArgs)
__props__.__dict__["cloudwatch_log_group_arn"] = cloudwatch_log_group_arn
if destination_location_arn is None and not opts.urn:
raise TypeError("Missing required property 'destination_location_arn'")
__props__.__dict__["destination_location_arn"] = destination_location_arn
__props__.__dict__["name"] = name
__props__.__dict__["options"] = options
if source_location_arn is None and not opts.urn:
raise TypeError("Missing required property 'source_location_arn'")
__props__.__dict__["source_location_arn"] = source_location_arn
__props__.__dict__["tags"] = tags
__props__.__dict__["arn"] = None
super(Task, __self__).__init__(
'aws:datasync/task:Task',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
arn: Optional[pulumi.Input[str]] = None,
cloudwatch_log_group_arn: Optional[pulumi.Input[str]] = None,
destination_location_arn: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
options: Optional[pulumi.Input[pulumi.InputType['TaskOptionsArgs']]] = None,
source_location_arn: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None) -> 'Task':
"""
Get an existing Task resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
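For example (the resource name is illustrative; the ARN follows the format shown in the Import section):
```python
existing = Task.get("existing",
    id="arn:aws:datasync:us-east-1:123456789012:task/task-12345678901234567")
```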
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] arn: Amazon Resource Name (ARN) of the DataSync Task.
:param pulumi.Input[str] cloudwatch_log_group_arn: Amazon Resource Name (ARN) of the CloudWatch Log Group that is used to monitor and log events in the sync task.
:param pulumi.Input[str] destination_location_arn: Amazon Resource Name (ARN) of destination DataSync Location.
:param pulumi.Input[str] name: Name of the DataSync Task.
:param pulumi.Input[pulumi.InputType['TaskOptionsArgs']] options: Configuration block containing options that control the default behavior when you start an execution of this DataSync Task. For each individual task execution, you can override these options by specifying an overriding configuration in those executions.
:param pulumi.Input[str] source_location_arn: Amazon Resource Name (ARN) of source DataSync Location.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Key-value pairs of resource tags to assign to the DataSync Task.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _TaskState.__new__(_TaskState)
__props__.__dict__["arn"] = arn
__props__.__dict__["cloudwatch_log_group_arn"] = cloudwatch_log_group_arn
__props__.__dict__["destination_location_arn"] = destination_location_arn
__props__.__dict__["name"] = name
__props__.__dict__["options"] = options
__props__.__dict__["source_location_arn"] = source_location_arn
__props__.__dict__["tags"] = tags
return Task(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def arn(self) -> pulumi.Output[str]:
"""
Amazon Resource Name (ARN) of the DataSync Task.
"""
return pulumi.get(self, "arn")
@property
@pulumi.getter(name="cloudwatchLogGroupArn")
def cloudwatch_log_group_arn(self) -> pulumi.Output[Optional[str]]:
"""
Amazon Resource Name (ARN) of the CloudWatch Log Group that is used to monitor and log events in the sync task.
"""
return pulumi.get(self, "cloudwatch_log_group_arn")
@property
@pulumi.getter(name="destinationLocationArn")
def destination_location_arn(self) -> pulumi.Output[str]:
"""
Amazon Resource Name (ARN) of destination DataSync Location.
"""
return pulumi.get(self, "destination_location_arn")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
Name of the DataSync Task.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def options(self) -> pulumi.Output[Optional['outputs.TaskOptions']]:
"""
Configuration block containing options that control the default behavior when you start an execution of this DataSync Task. For each individual task execution, you can override these options by specifying an overriding configuration in those executions.
"""
return pulumi.get(self, "options")
@property
@pulumi.getter(name="sourceLocationArn")
def source_location_arn(self) -> pulumi.Output[str]:
"""
Amazon Resource Name (ARN) of source DataSync Location.
"""
return pulumi.get(self, "source_location_arn")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
Key-value pairs of resource tags to assign to the DataSync Task.
"""
return pulumi.get(self, "tags")
# Source: tests/test_update_file.py (repo: uilianries/conan-clang-update @ c52e514bb4fb0124348541e0808cef7ca59c5dca, license: MIT)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from conan_clang_update.conan_clang_update import Command
import tempfile
import filecmp
TRAVIS_FILE = """linux: &linux
os: linux
sudo: required
language: python
python: "3.6"
services:
- docker
osx: &osx
os: osx
language: generic
matrix:
include:
- <<: *linux
env: CONAN_GCC_VERSIONS=4.9 CONAN_DOCKER_IMAGE=lasote/conangcc49
- <<: *linux
env: CONAN_GCC_VERSIONS=5 CONAN_DOCKER_IMAGE=lasote/conangcc5
- <<: *linux
env: CONAN_GCC_VERSIONS=6 CONAN_DOCKER_IMAGE=lasote/conangcc6
- <<: *linux
env: CONAN_GCC_VERSIONS=7 CONAN_DOCKER_IMAGE=lasote/conangcc7
- <<: *linux
env: CONAN_CLANG_VERSIONS=3.9 CONAN_DOCKER_IMAGE=lasote/conanclang39
- <<: *linux
env: CONAN_CLANG_VERSIONS=4.0 CONAN_DOCKER_IMAGE=lasote/conanclang40
- <<: *linux
env: CONAN_CLANG_VERSIONS=5.0 CONAN_DOCKER_IMAGE=lasote/conanclang50
- <<: *osx
osx_image: xcode7.3
env: CONAN_APPLE_CLANG_VERSIONS=7.3
- <<: *osx
osx_image: xcode8.3
env: CONAN_APPLE_CLANG_VERSIONS=8.1
- <<: *osx
osx_image: xcode9
env: CONAN_APPLE_CLANG_VERSIONS=9.0
install:
- chmod +x .travis/install.sh
- ./.travis/install.sh
script:
- chmod +x .travis/run.sh
- ./.travis/run.sh
"""
UPDATED_TRAVIS = """linux: &linux
os: linux
sudo: required
language: python
python: "3.6"
services:
- docker
osx: &osx
os: osx
language: generic
matrix:
include:
- <<: *linux
env: CONAN_GCC_VERSIONS=4.9 CONAN_DOCKER_IMAGE=lasote/conangcc49
- <<: *linux
env: CONAN_GCC_VERSIONS=5 CONAN_DOCKER_IMAGE=lasote/conangcc5
- <<: *linux
env: CONAN_GCC_VERSIONS=6 CONAN_DOCKER_IMAGE=lasote/conangcc6
- <<: *linux
env: CONAN_GCC_VERSIONS=7 CONAN_DOCKER_IMAGE=lasote/conangcc7
- <<: *linux
env: CONAN_CLANG_VERSIONS=3.9 CONAN_DOCKER_IMAGE=lasote/conanclang39
- <<: *linux
env: CONAN_CLANG_VERSIONS=4.0 CONAN_DOCKER_IMAGE=lasote/conanclang40
- <<: *linux
env: CONAN_CLANG_VERSIONS=5.0 CONAN_DOCKER_IMAGE=lasote/conanclang50
- <<: *osx
osx_image: xcode7.3
env: CONAN_APPLE_CLANG_VERSIONS=7.3
- <<: *osx
osx_image: xcode8.3
env: CONAN_APPLE_CLANG_VERSIONS=8.1
- <<: *osx
osx_image: xcode9
env: CONAN_APPLE_CLANG_VERSIONS=9.0
- <<: *osx
osx_image: xcode9.3
env: CONAN_APPLE_CLANG_VERSIONS=9.1
install:
- chmod +x .travis/install.sh
- ./.travis/install.sh
script:
- chmod +x .travis/run.sh
- ./.travis/run.sh
"""
def test_update_clang_file():
""" Create a standard travis file and update it.
"""
_, travis_path = tempfile.mkstemp(prefix='travis', suffix='.yml')
with open(travis_path, 'w') as file:
file.write(TRAVIS_FILE)
_, expected_path = tempfile.mkstemp(prefix='travis', suffix='.yml')
with open(expected_path, 'w') as file:
file.write(UPDATED_TRAVIS)
args = ['--file', travis_path]
command = Command()
command.run(args)
assert filecmp.cmp(travis_path, expected_path)
# Source: windows/nsist/tests/test_commands.py (repo: andredoumad/p3env @ a8850d06755d53eb6fedd9995091dad34f1f9ccd, license: Apache-2.0)
import io
from testpath import assert_isfile, assert_not_path_exists
from zipfile import ZipFile
from nsist import commands, _assemble_launchers
def test_prepare_bin_dir(tmpdir):
cmds = {
'acommand': {
'entry_point': 'somemod:somefunc',
'extra_preamble': io.StringIO(u'import extra')
}
}
commands.prepare_bin_directory(tmpdir, cmds)
launcher_file = str(tmpdir / 'launcher_exe.dat')
launcher_noconsole_file = str(tmpdir / 'launcher_noconsole_exe.dat')
zip_file = str(tmpdir / 'acommand-append.zip')
zip_file_invalid = str(tmpdir / 'acommand-append-noconsole.zip')
exe_file = str(tmpdir / 'acommand.exe')
assert_isfile(launcher_file)
assert_isfile(launcher_noconsole_file)
assert_isfile(zip_file)
assert_not_path_exists(zip_file_invalid)
assert_not_path_exists(exe_file)
with open(launcher_file, 'rb') as lf:
b_launcher = lf.read()
assert b_launcher[:2] == b'MZ'
with open(launcher_noconsole_file, 'rb') as lf:
assert lf.read(2) == b'MZ'
with ZipFile(zip_file) as zf:
assert zf.testzip() is None
script_contents = zf.read('__main__.py').decode('utf-8')
assert 'import extra' in script_contents
assert 'somefunc()' in script_contents
_assemble_launchers.main(['_assemble_launchers.py', 'C:\\path\\to\\python', str(tmpdir)])
assert_isfile(exe_file)
with open(exe_file, 'rb') as ef, open(zip_file, 'rb') as zf:
b_exe = ef.read()
b_zip = zf.read()
assert b_exe[:len(b_launcher)] == b_launcher
assert b_exe[len(b_launcher):-len(b_zip)].decode('utf-8') == '#!"C:\\path\\to\\python.exe"\r\n'
assert b_exe[-len(b_zip):] == b_zip
with ZipFile(exe_file) as zf:
assert zf.testzip() is None
assert zf.read('__main__.py').decode('utf-8') == script_contents
def test_prepare_bin_dir_noconsole(tmpdir):
cmds = {
'acommand': {
'entry_point': 'somemod:somefunc',
'console': False
}
}
commands.prepare_bin_directory(tmpdir, cmds)
launcher_file = str(tmpdir / 'launcher_exe.dat')
launcher_noconsole_file = str(tmpdir / 'launcher_noconsole_exe.dat')
zip_file = str(tmpdir / 'acommand-append-noconsole.zip')
zip_file_invalid = str(tmpdir / 'acommand-append.zip')
exe_file = str(tmpdir / 'acommand.exe')
assert_isfile(launcher_file)
assert_isfile(launcher_noconsole_file)
assert_isfile(zip_file)
assert_not_path_exists(zip_file_invalid)
assert_not_path_exists(exe_file)
with open(launcher_file, 'rb') as lf:
assert lf.read(2) == b'MZ'
with open(launcher_noconsole_file, 'rb') as lf:
b_launcher = lf.read()
assert b_launcher[:2] == b'MZ'
with ZipFile(zip_file) as zf:
assert zf.testzip() is None
script_contents = zf.read('__main__.py').decode('utf-8')
assert 'import extra' not in script_contents
assert 'somefunc()' in script_contents
_assemble_launchers.main(['_assemble_launchers.py', 'C:\\custom\\python.exe', str(tmpdir)])
assert_isfile(exe_file)
with open(exe_file, 'rb') as ef, open(zip_file, 'rb') as zf:
b_exe = ef.read()
b_zip = zf.read()
assert b_exe[:len(b_launcher)] == b_launcher
assert b_exe[len(b_launcher):-len(b_zip)].decode('utf-8') == '#!"C:\\custom\\pythonw.exe"\r\n'
assert b_exe[-len(b_zip):] == b_zip
with ZipFile(exe_file) as zf:
assert zf.testzip() is None
assert zf.read('__main__.py').decode('utf-8') == script_contents
# Source: benchmarks/SimResults/combinations_spec_ml/cmp_bwavesgcccactusADMastar/power.py (repo: TugberkArkose/MLScheduler @ e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061, license: Unlicense)
power = {'BUSES': {'Area': 1.33155,
'Bus/Area': 1.33155,
'Bus/Gate Leakage': 0.00662954,
'Bus/Peak Dynamic': 0.0,
'Bus/Runtime Dynamic': 0.0,
'Bus/Subthreshold Leakage': 0.0691322,
'Bus/Subthreshold Leakage with power gating': 0.0259246,
'Gate Leakage': 0.00662954,
'Peak Dynamic': 0.0,
'Runtime Dynamic': 0.0,
'Subthreshold Leakage': 0.0691322,
'Subthreshold Leakage with power gating': 0.0259246},
'Core': [{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.053904,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.245027,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.289257,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.30597,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.529829,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.303872,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.13967,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.258091,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 5.89685,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0546468,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0110916,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.100462,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0820295,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.155109,
'Execution Unit/Register Files/Runtime Dynamic': 0.0931211,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.257525,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.672288,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 2.55548,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00163005,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00163005,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00143654,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000565279,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00117836,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.005875,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0150297,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.078857,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 5.01598,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.202685,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.267834,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 7.48166,
'Instruction Fetch Unit/Runtime Dynamic': 0.570281,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0706191,
'L2/Runtime Dynamic': 0.00856531,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 3.99773,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.33697,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0893123,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0893123,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 4.4212,
'Load Store Unit/Runtime Dynamic': 1.86674,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.220229,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.440458,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.07816,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0791919,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.311875,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0333121,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.605498,
'Memory Management Unit/Runtime Dynamic': 0.112504,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 23.0375,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.19065,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0179397,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.157466,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 0.366056,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 5.47963,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0346975,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.229942,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.198203,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.143415,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.231323,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.116764,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.491501,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.133639,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.42734,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0374448,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00601546,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0560496,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.044488,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0934944,
'Execution Unit/Register Files/Runtime Dynamic': 0.0505035,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.126757,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.335055,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.51238,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.000830403,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.000830403,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000758546,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000312934,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000639074,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00305843,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00670182,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0427675,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 2.72038,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.106931,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.145258,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 5.07092,
'Instruction Fetch Unit/Runtime Dynamic': 0.304716,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0342764,
'L2/Runtime Dynamic': 0.00451646,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.80425,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.758479,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0507005,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0507005,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.04367,
'Load Store Unit/Runtime Dynamic': 1.05922,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.125019,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.250038,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0443696,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0448742,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.169143,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0175596,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.401471,
'Memory Management Unit/Runtime Dynamic': 0.0624339,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 16.5671,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.0984998,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0076692,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0719841,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.178153,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 3.12141,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0654642,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.254107,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.41382,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.173661,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.280109,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.14139,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.59516,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.135175,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.78193,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0781795,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00728413,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.074731,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0538706,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.152911,
'Execution Unit/Register Files/Runtime Dynamic': 0.0611548,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.173807,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.450544,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.76634,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.000695062,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.000695062,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000624775,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000252459,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000773856,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00278875,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00597187,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0517872,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 3.29411,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.144403,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.175893,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 5.6725,
'Instruction Fetch Unit/Runtime Dynamic': 0.380844,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0357269,
'L2/Runtime Dynamic': 0.00836787,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 3.21907,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.962932,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0641208,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0641209,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.52186,
'Load Store Unit/Runtime Dynamic': 1.34327,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.158111,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.316222,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0561141,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.056604,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.204816,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.023811,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.457319,
'Memory Management Unit/Runtime Dynamic': 0.0804151,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 18.0588,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.205655,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0103379,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0845308,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.300523,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 3.87977,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.019506,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.21801,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.119413,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.132224,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.213272,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.107653,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.453149,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.132919,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.28523,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0225596,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00554607,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0468358,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0410166,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0693955,
'Execution Unit/Register Files/Runtime Dynamic': 0.0465626,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.103548,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.269682,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.39278,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00132245,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00132245,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00119262,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000483977,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000589206,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00442672,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.011223,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0394303,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 2.5081,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.130226,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.133923,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 4.84834,
'Instruction Fetch Unit/Runtime Dynamic': 0.319229,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0349533,
'L2/Runtime Dynamic': 0.00909038,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.55452,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.64531,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0426209,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0426209,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 2.75578,
'Load Store Unit/Runtime Dynamic': 0.898123,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.105096,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.210192,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0372989,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0377348,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.155945,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0216124,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.376126,
'Memory Management Unit/Runtime Dynamic': 0.0593473,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 15.8899,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.0593435,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.00668778,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0662123,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.132244,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 2.81081,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328}],
'DRAM': {'Area': 0,
'Gate Leakage': 0,
'Peak Dynamic': 5.063209259898524,
'Runtime Dynamic': 5.063209259898524,
'Subthreshold Leakage': 4.252,
'Subthreshold Leakage with power gating': 4.252},
'L3': [{'Area': 61.9075,
'Gate Leakage': 0.0484137,
'Peak Dynamic': 0.273235,
'Runtime Dynamic': 0.0754315,
'Subthreshold Leakage': 6.80085,
'Subthreshold Leakage with power gating': 3.32364}],
'Processor': {'Area': 191.908,
'Gate Leakage': 1.53485,
'Peak Dynamic': 73.8266,
'Peak Power': 106.939,
'Runtime Dynamic': 15.3671,
'Subthreshold Leakage': 31.5774,
'Subthreshold Leakage with power gating': 13.9484,
'Total Cores/Area': 128.669,
'Total Cores/Gate Leakage': 1.4798,
'Total Cores/Peak Dynamic': 73.5534,
'Total Cores/Runtime Dynamic': 15.2916,
'Total Cores/Subthreshold Leakage': 24.7074,
'Total Cores/Subthreshold Leakage with power gating': 10.2429,
'Total L3s/Area': 61.9075,
'Total L3s/Gate Leakage': 0.0484137,
'Total L3s/Peak Dynamic': 0.273235,
'Total L3s/Runtime Dynamic': 0.0754315,
'Total L3s/Subthreshold Leakage': 6.80085,
'Total L3s/Subthreshold Leakage with power gating': 3.32364,
'Total Leakage': 33.1122,
'Total NoCs/Area': 1.33155,
'Total NoCs/Gate Leakage': 0.00662954,
'Total NoCs/Peak Dynamic': 0.0,
'Total NoCs/Runtime Dynamic': 0.0,
'Total NoCs/Subthreshold Leakage': 0.0691322,
'Total NoCs/Subthreshold Leakage with power gating': 0.0259246}}
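The dictionaries above use flat, slash-delimited keys to encode a component hierarchy (e.g. `'Load Store Unit/LoadQ/Runtime Dynamic'`). A minimal sketch (not part of the McPAT output; `runtime_dynamic_by_component` and the sample values are illustrative) of grouping the per-component `'Runtime Dynamic'` entries:

```python
def runtime_dynamic_by_component(stats):
    """Collect direct 'Component/Runtime Dynamic' power entries.

    Nested sub-blocks such as 'Component/Sub/Runtime Dynamic' are
    skipped so each component's own aggregate is not double-counted.
    """
    totals = {}
    for key, value in stats.items():
        parts = key.split('/')
        if len(parts) == 2 and parts[1] == 'Runtime Dynamic':
            totals[parts[0]] = value
    return totals

# hypothetical subset of the values listed above
stats = {
    'L2/Runtime Dynamic': 0.00909038,
    'L2/Area': 4.53318,
    'Load Store Unit/Runtime Dynamic': 0.898123,
    'Load Store Unit/LoadQ/Runtime Dynamic': 0.0426209,
}
result = runtime_dynamic_by_component(stats)
```

The same pattern extends to `'Peak Dynamic'` or the leakage keys by swapping the leaf name being matched.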
# Source file: pyNastran/bdf/cards/elements/damper.py
# Repository: Msegade/pyNastran, license: BSD-3-Clause
"""
All damper elements are defined in this file. This includes:
* CDAMP1
* CDAMP2
* CDAMP3
* CDAMP4
* CDAMP5
* CVISC
All damper elements are DamperElement and Element objects.
"""
from __future__ import annotations
from typing import TYPE_CHECKING
from pyNastran.utils.numpy_utils import integer_types
from pyNastran.bdf.cards.base_card import Element
from pyNastran.bdf.bdf_interface.assign_type import (
integer, integer_or_blank, double)
from pyNastran.bdf.field_writer_8 import print_card_8
if TYPE_CHECKING: # pragma: no cover
from pyNastran.bdf.bdf import BDF
class DamperElement(Element):
def __init__(self):
Element.__init__(self)
#def Centroid(self):
        ## same as below, but we ignore the 2nd point if it's None
##p = (self.nodes_ref[1].get_position() + self.nodes_ref[0].get_position()) / 2.
#p = self.nodes_ref[0].get_position()
#if self.nodes_ref[1] is not None:
#p += self.nodes_ref[1].get_position()
#p /= 2.
#return p
#def center_of_mass(self):
#return self.Centroid()
class LineDamper(DamperElement):
def __init__(self):
DamperElement.__init__(self)
class CDAMP1(LineDamper):
type = 'CDAMP1'
_field_map = {
        1: 'eid', 2: 'pid', 4: 'c1', 6: 'c2',
}
def _update_field_helper(self, n, value):
if n == 3:
self.nodes[0] = value
elif n == 5:
self.nodes[1] = value
else:
raise KeyError('Field %r=%r is an invalid %s entry.' % (n, value, self.type))
def __init__(self, eid, pid, nids, c1=0, c2=0, comment=''):
"""
Creates a CDAMP1 card
Parameters
----------
eid : int
element id
pid : int
property id (PDAMP)
nids : List[int, int]
node ids
c1 / c2 : int; default=0
DOF for nid1 / nid2
comment : str; default=''
a comment for the card
"""
LineDamper.__init__(self)
if comment:
self.comment = comment
self.eid = eid
self.pid = pid
self.c1 = c1
self.c2 = c2
self.nodes = self.prepare_node_ids(nids, allow_empty_nodes=True)
self.nodes_ref = None
self.pid_ref = None
@classmethod
def export_to_hdf5(cls, h5_file, model, eids):
"""exports the elements in a vectorized way"""
#comments = []
pids = []
nodes = []
components = []
for eid in eids:
element = model.elements[eid]
#comments.append(element.comment)
pids.append(element.pid)
nodes.append([nid if nid is not None else 0 for nid in element.nodes])
#components.append([comp if comp is not None else 0 for comp in [element.c1, element.c2]])
components.append([element.c1, element.c2])
#h5_file.create_dataset('_comment', data=comments)
h5_file.create_dataset('eid', data=eids)
h5_file.create_dataset('pid', data=pids)
h5_file.create_dataset('nodes', data=nodes)
h5_file.create_dataset('components', data=components)
@classmethod
def add_card(cls, card, comment=''):
"""
Adds a CDAMP1 card from ``BDF.add_card(...)``
Parameters
----------
card : BDFCard()
a BDFCard object
comment : str; default=''
a comment for the card
"""
eid = integer(card, 1, 'eid')
pid = integer_or_blank(card, 2, 'pid', eid)
nids = [integer_or_blank(card, 3, 'g1', 0),
integer_or_blank(card, 5, 'g2', 0)]
#: component number
c1 = integer_or_blank(card, 4, 'c1', 0)
c2 = integer_or_blank(card, 6, 'c2', 0)
assert len(card) <= 7, 'len(CDAMP1 card) = %i\ncard=%s' % (len(card), card)
return CDAMP1(eid, pid, nids, c1, c2, comment=comment)
@classmethod
def add_op2_data(cls, data, comment=''):
"""
Adds a CDAMP1 card from the OP2
Parameters
----------
data : List[varies]
a list of fields defined in OP2 format
comment : str; default=''
a comment for the card
"""
eid, pid, g1, g2, c1, c2 = data
nids = [g1, g2]
return CDAMP1(eid, pid, nids, c1, c2, comment=comment)
def validate(self):
        msg = 'on\n%s\nis invalid; valid components=[0, 1, 2, 3, 4, 5, 6]' % str(self)
assert self.c1 in [0, 1, 2, 3, 4, 5, 6], 'c1=%r %s' % (self.c1, msg)
assert self.c2 in [0, 1, 2, 3, 4, 5, 6], 'c2=%r %s' % (self.c2, msg)
assert len(self.nodes) == 2
def _verify(self, xref):
eid = self.eid
pid = self.Pid()
nids = self.node_ids
assert isinstance(eid, integer_types)
assert isinstance(pid, integer_types)
for i, nid in enumerate(nids):
            assert nid is None or isinstance(nid, integer_types), 'nid%i is not None/an integer; nid=%s' % (i, nid)
if xref:
            if self.pid_ref.type == 'PDAMP':
b = self.B()
assert isinstance(b, float)
            elif self.pid_ref.type == 'PDAMPT':
pass
else:
raise NotImplementedError('pid=%i self.pid_ref.type=%s' % (pid, self.pid_ref.type))
@property
def node_ids(self):
if self.nodes_ref is None:
return self.nodes
#return [nid if nid else None
#for nid in self._node_ids(nodes=self.nodes_ref, allow_empty_nodes=True)]
return self._node_ids(nodes=self.nodes_ref, allow_empty_nodes=True)
def get_edge_ids(self):
return [tuple(sorted(self.node_ids))]
def B(self):
return self.pid_ref.b
def cross_reference(self, model: BDF) -> None:
"""
Cross links the card so referenced cards can be extracted directly
Parameters
----------
model : BDF()
the BDF object
"""
msg = ', which is required by CDAMP1 eid=%s' % self.eid
self.nodes_ref = model.EmptyNodes(self.nodes, msg=msg)
pid = self.pid
if pid in model.properties:
self.pid_ref = model.Property(pid, msg=msg)
elif pid in model.pdampt:
self.pid_ref = model.pdampt[pid]
else:
            pids = list(model.properties.keys()) + list(model.pdampt.keys())
pids.sort()
msg = ('pid=%i not found which is required by CDAMP1 eid=%i. '
'Allowed Pids=%s' % (self.pid, self.eid, pids))
raise KeyError(msg)
def safe_cross_reference(self, model, xref_errors):
"""
Cross links the card so referenced cards can be extracted directly
Parameters
----------
model : BDF()
the BDF object
"""
self.cross_reference(model)
def uncross_reference(self) -> None:
"""Removes cross-reference links"""
self.nodes = self.node_ids
self.pid = self.Pid()
self.nodes_ref = None
self.pid_ref = None
def raw_fields(self):
nodes = self.node_ids
fields = ['CDAMP1', self.eid, self.Pid(), nodes[0], self.c1,
nodes[1], self.c2]
return fields
def write_card(self, size: int=8, is_double: bool=False) -> str:
card = self.raw_fields()
return self.comment + print_card_8(card)
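`raw_fields`/`write_card` above emit the entry in Nastran's small-field (8-column) bulk data format via `print_card_8`. A simplified sketch of that layout for the CDAMP1 fields (`print_card_8_sketch` is a hypothetical stand-in; the real `print_card_8` also formats floats and wraps long cards onto continuation lines):

```python
def print_card_8_sketch(fields):
    # pad every field to an 8-character column; None becomes a blank field
    # (simplified: no float formatting, no continuation-line handling)
    line = ''.join('%-8s' % ('' if f is None else f) for f in fields)
    return line.rstrip() + '\n'

# eid=101, pid=2, g1=10/c1=1, g2=20/c2=2 (hypothetical values)
out = print_card_8_sketch(['CDAMP1', 101, 2, 10, 1, 20, 2])
```

Each field lands in a fixed 8-character slot, which is why `raw_fields` only needs to return the values in order.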
class CDAMP2(LineDamper):
type = 'CDAMP2'
_field_map = {
        1: 'eid', 2: 'b', 4: 'c1', 6: 'c2',
}
cp_name_map = {'B' : 'b'}
def _update_field_helper(self, n, value):
if n == 3:
self.nodes[0] = value
elif n == 5:
self.nodes[1] = value
else:
raise KeyError('Field %r=%r is an invalid %s entry.' % (n, value, self.type))
def __init__(self, eid, b, nids, c1=0, c2=0, comment=''):
"""
Creates a CDAMP2 card
Parameters
----------
eid : int
element id
b : float
damping
nids : List[int, int]
SPOINT ids
node ids
c1 / c2 : int; default=0
DOF for nid1 / nid2
comment : str; default=''
a comment for the card
"""
LineDamper.__init__(self)
if comment:
self.comment = comment
self.eid = eid
#: Value of the scalar damper (Real)
self.b = b
#: component number
self.c1 = c1
self.c2 = c2
# CDAMP2 do not have to be unique
self.nodes = self.prepare_node_ids(nids, allow_empty_nodes=True)
self.nodes_ref = None
self.pid = 0
self.pid_ref = None
@classmethod
def export_to_hdf5(cls, h5_file, model, eids):
"""exports the elements in a vectorized way"""
#comments = []
b = []
nodes = []
components = []
for eid in eids:
element = model.elements[eid]
#comments.append(element.comment)
b.append(element.b)
nodes.append([nid if nid is not None else 0 for nid in element.nodes])
components.append([element.c1, element.c2])
#h5_file.create_dataset('_comment', data=comments)
h5_file.create_dataset('eid', data=eids)
h5_file.create_dataset('B', data=b)
h5_file.create_dataset('nodes', data=nodes)
h5_file.create_dataset('components', data=components)
@classmethod
def add_card(cls, card, comment=''):
"""
Adds a CDAMP2 card from ``BDF.add_card(...)``
Parameters
----------
card : BDFCard()
a BDFCard object
comment : str; default=''
a comment for the card
"""
eid = integer(card, 1, 'eid')
b = double(card, 2, 'b')
nids = [integer_or_blank(card, 3, 'n1', 0),
integer_or_blank(card, 5, 'n2', 0)]
c1 = integer_or_blank(card, 4, 'c1', 0)
c2 = integer_or_blank(card, 6, 'c2', 0)
assert len(card) <= 7, 'len(CDAMP2 card) = %i\ncard=%s' % (len(card), card)
return CDAMP2(eid, b, nids, c1, c2, comment=comment)
@classmethod
def add_op2_data(cls, data, comment=''):
"""
Adds a CDAMP2 card from the OP2
Parameters
----------
data : List[varies]
a list of fields defined in OP2 format
comment : str; default=''
a comment for the card
"""
eid = data[0]
b = data[1]
nids = [data[2], data[3]]
c1 = data[4]
c2 = data[5]
return CDAMP2(eid, b, nids, c1, c2, comment=comment)
def validate(self):
assert len(self.nodes) == 2
        msg = 'on\n%s\nis invalid; valid components=[0, 1, 2, 3, 4, 5, 6]' % str(self)
assert self.c1 in [0, 1, 2, 3, 4, 5, 6], 'c1=%r %s' % (self.c1, msg)
assert self.c2 in [0, 1, 2, 3, 4, 5, 6], 'c2=%r %s' % (self.c2, msg)
@property
def node_ids(self):
return self._node_ids(nodes=self.nodes_ref, allow_empty_nodes=True)
def get_edge_ids(self):
node_ids = self._node_ids(nodes=self.nodes_ref, allow_empty_nodes=True)
if isinstance(node_ids[0], integer_types) and isinstance(node_ids[1], integer_types):
return [tuple(sorted(node_ids))]
return []
def B(self):
return self.b
def cross_reference(self, model: BDF) -> None:
"""
Cross links the card so referenced cards can be extracted directly
Parameters
----------
model : BDF()
the BDF object
"""
msg = ', which is required by CDAMP2 eid=%s' % self.eid
self.nodes_ref = model.EmptyNodes(self.nodes, msg=msg)
def safe_cross_reference(self, model, xref_errors):
"""
Cross links the card so referenced cards can be extracted directly
Parameters
----------
model : BDF()
the BDF object
"""
self.cross_reference(model)
def uncross_reference(self) -> None:
"""Removes cross-reference links"""
self.nodes = self.node_ids
self.nodes_ref = None
def _verify(self, xref):
eid = self.eid
b = self.B()
nids = self.node_ids
assert isinstance(eid, integer_types)
assert isinstance(b, float)
for i, nid in enumerate(nids):
            assert nid is None or isinstance(nid, integer_types), 'nid%i is not an integer/None; nid=%s' % (i, nid)
def raw_fields(self):
nodes = self.node_ids
fields = ['CDAMP2', self.eid, self.b, nodes[0], self.c1,
nodes[1], self.c2]
return fields
def write_card(self, size: int=8, is_double: bool=False) -> str:
card = self.raw_fields()
return self.comment + print_card_8(card)
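``write_card`` above delegates to ``print_card_8``, which lays the raw fields out in NASTRAN's small-field format: fixed 8-character columns. The stand-in below is a deliberately simplified sketch of that layout only; pyNastran's real ``print_card_8`` additionally compresses floats and emits continuation lines.

```python
def print_card_8_sketch(fields):
    """Simplified stand-in for pyNastran's print_card_8: left-justify each
    field in an 8-character column (NASTRAN small-field format).  The real
    function also shortens floats and handles continuation lines."""
    body = ''.join('%-8s' % ('' if field is None else field) for field in fields)
    return body.rstrip() + '\n'

# fields as produced by CDAMP2.raw_fields() (hypothetical values)
line = print_card_8_sketch(['CDAMP2', 1001, 2.5, 10, 0, 11, 0])
print(line)
```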
class CDAMP3(LineDamper):
"""
+--------+-----+-----+----+----+
| 1 | 2 | 3 | 4 | 5 |
+========+=====+=====+====+====+
| CDAMP3 | EID | PID | S1 | S2 |
+--------+-----+-----+----+----+
"""
type = 'CDAMP3'
_field_map = {
1: 'eid', 2:'pid',
}
def _update_field_helper(self, n, value):
if n == 3:
self.nodes[0] = value
elif n == 4:
self.nodes[1] = value
else:
raise KeyError('Field %r=%r is an invalid %s entry.' % (n, value, self.type))
def __init__(self, eid, pid, nids, comment=''):
"""
Creates a CDAMP3 card
Parameters
----------
eid : int
element id
pid : int
property id (PDAMP)
nids : List[int, int]
SPOINT ids
comment : str; default=''
a comment for the card
"""
        LineDamper.__init__(self)
        if comment:
            self.comment = comment
self.eid = eid
self.pid = pid
self.nodes = self.prepare_node_ids(nids, allow_empty_nodes=True)
assert len(self.nodes) == 2
self.pid_ref = None
self.nodes_ref = None
@classmethod
def export_to_hdf5(cls, h5_file, model, eids):
"""exports the elements in a vectorized way"""
#comments = []
pids = []
nodes = []
for eid in eids:
element = model.elements[eid]
#comments.append(element.comment)
pids.append(element.pid)
nodes.append([nid if nid is not None else 0 for nid in element.nodes])
#h5_file.create_dataset('_comment', data=comments)
h5_file.create_dataset('eid', data=eids)
h5_file.create_dataset('pid', data=pids)
h5_file.create_dataset('nodes', data=nodes)
@classmethod
def add_card(cls, card, comment=''):
"""
Adds a CDAMP3 card from ``BDF.add_card(...)``
Parameters
----------
card : BDFCard()
a BDFCard object
comment : str; default=''
a comment for the card
"""
eid = integer(card, 1, 'eid')
pid = integer(card, 2, 'pid')
nids = [integer_or_blank(card, 3, 's1', 0),
integer_or_blank(card, 4, 's2', 0)]
assert len(card) <= 5, 'len(CDAMP3 card) = %i\ncard=%s' % (len(card), card)
return CDAMP3(eid, pid, nids, comment=comment)
@classmethod
def add_op2_data(cls, data, comment=''):
"""
Adds a CDAMP3 card from the OP2
Parameters
----------
data : List[varies]
a list of fields defined in OP2 format
comment : str; default=''
a comment for the card
"""
eid = data[0]
pid = data[1]
nids = [data[2], data[3]]
return CDAMP3(eid, pid, nids, comment=comment)
def _verify(self, xref):
eid = self.eid
pid = self.Pid()
b = self.B()
nids = self.node_ids
assert isinstance(eid, integer_types)
assert isinstance(pid, integer_types)
assert isinstance(b, float)
for i, nid in enumerate(nids):
assert nid is None or isinstance(nid, integer_types), 'nid%i is not an integer/None; nid=%s' % (i, nid)
if xref:
assert self.pid_ref.type in ['PDAMP'], 'pid=%i self.pid_ref.type=%s' % (pid, self.pid_ref.type)
def B(self):
return self.pid_ref.b
def cross_reference(self, model: BDF) -> None:
"""
Cross links the card so referenced cards can be extracted directly
Parameters
----------
model : BDF()
the BDF object
"""
msg = ', which is required by CDAMP3 eid=%s' % (self.eid)
self.nodes_ref = model.EmptyNodes(self.nodes, msg=msg)
self.pid_ref = model.Property(self.pid, msg=msg)
def safe_cross_reference(self, model, xref_errors):
"""
Cross links the card so referenced cards can be extracted directly
Parameters
----------
model : BDF()
the BDF object
"""
msg = ', which is required by CDAMP3 eid=%s' % self.eid
self.nodes_ref = model.EmptyNodes(self.nodes, msg=msg)
#self.nodes_ref = model.safe_empty_nodes(self.nodes, msg=msg)
self.pid_ref = model.safe_property(self.pid, self.eid, xref_errors, msg=msg)
def uncross_reference(self) -> None:
"""Removes cross-reference links"""
self.nodes = self.node_ids
self.pid = self.Pid()
self.pid_ref = None
self.nodes_ref = None
@property
def node_ids(self):
msg = ', which is required by CDAMP3 eid=%s' % (self.eid)
return self._node_ids(nodes=self.nodes_ref, allow_empty_nodes=True, msg=msg)
def raw_fields(self):
list_fields = ['CDAMP3', self.eid, self.Pid()] + self.node_ids
return list_fields
def write_card(self, size: int=8, is_double: bool=False) -> str:
card = self.raw_fields()
return self.comment + print_card_8(card)
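Note the split in where ``B()`` finds the damping value: CDAMP2/CDAMP4 carry it directly on the element (``self.b``), while CDAMP3 (and CDAMP5 below) read it from the cross-referenced property. A minimal standalone sketch with stub classes (hypothetical names, not pyNastran types):

```python
class PDampStub:
    """Stand-in for a PDAMP property card holding the damping value."""
    def __init__(self, b):
        self.b = b

class ElementWithOwnB:
    """CDAMP2/CDAMP4 pattern: damping stored on the element itself."""
    def __init__(self, b):
        self.b = b
    def B(self):
        return self.b

class ElementWithPropertyB:
    """CDAMP3/CDAMP5 pattern: damping read from the referenced property."""
    def __init__(self, pid_ref):
        self.pid_ref = pid_ref
    def B(self):
        return self.pid_ref.b

print(ElementWithOwnB(2.0).B())                  # 2.0
print(ElementWithPropertyB(PDampStub(3.0)).B())  # 3.0
```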
class CDAMP4(LineDamper):
    """
    +--------+-----+-----+----+----+
    |   1    |  2  |  3  |  4 | 5  |
    +========+=====+=====+====+====+
    | CDAMP4 | EID |  B  | S1 | S2 |
    +--------+-----+-----+----+----+
    """
type = 'CDAMP4'
_field_map = {
1: 'eid', 2:'b',
}
def _update_field_helper(self, n, value):
if n == 3:
self.nodes[0] = value
elif n == 4:
self.nodes[1] = value
else:
raise KeyError('Field %r=%r is an invalid %s entry.' % (n, value, self.type))
def __init__(self, eid, b, nids, comment=''):
"""
Creates a CDAMP4 card
Parameters
----------
eid : int
element id
b : float
damping
nids : List[int, int]
SPOINT ids
comment : str; default=''
a comment for the card
"""
LineDamper.__init__(self)
if comment:
self.comment = comment
self.eid = eid
self.b = b
self.nids = nids
self.nodes = self.prepare_node_ids(nids, allow_empty_nodes=True)
assert len(self.nodes) == 2
self.nodes_ref = None
@classmethod
def export_to_hdf5(cls, h5_file, model, eids):
"""exports the elements in a vectorized way"""
#comments = []
b = []
nodes = []
for eid in eids:
element = model.elements[eid]
#comments.append(element.comment)
b.append(element.b)
nodes.append([nid if nid is not None else 0 for nid in element.nodes])
#h5_file.create_dataset('_comment', data=comments)
h5_file.create_dataset('eid', data=eids)
h5_file.create_dataset('B', data=b)
h5_file.create_dataset('nodes', data=nodes)
    @classmethod
    def add_card(cls, card, icard=0, comment=''):
        """
        Adds a CDAMP4 card from ``BDF.add_card(...)``

        Parameters
        ----------
        card : BDFCard()
            a BDFCard object
        icard : int; default=0
            the element index on the card; a single CDAMP4 card
            may define two elements
        comment : str; default=''
            a comment for the card
        """
ioffset = icard * 4
eid = integer(card, 1 + ioffset, 'eid')
#: Value of the scalar damper (Real)
b = double(card, 2 + ioffset, 'b')
nids = [
integer_or_blank(card, 3 + ioffset, 'n1', 0),
integer_or_blank(card, 4 + ioffset, 'n2', 0)
]
assert len(card) <= 9, 'len(CDAMP4 card) = %i\ncard=%s' % (len(card), card)
return CDAMP4(eid, b, nids, comment=comment)
@classmethod
def add_op2_data(cls, data, comment=''):
"""
Adds a CDAMP4 card from the OP2
Parameters
----------
data : List[varies]
a list of fields defined in OP2 format
comment : str; default=''
a comment for the card
"""
eid = data[0]
b = data[1]
nids = [data[2], data[3]]
return CDAMP4(eid, b, nids, comment=comment)
def _verify(self, xref):
eid = self.eid
b = self.B()
nids = self.node_ids
assert isinstance(eid, integer_types)
assert isinstance(b, float)
for i, nid in enumerate(nids):
assert nid is None or isinstance(nid, integer_types), 'nid%i is not an integer/None; nid=%s' % (i, nid)
@property
def node_ids(self):
if self.nodes_ref is None:
return self.nodes
msg = ', which is required by CDAMP4 eid=%s' % (self.eid)
nids = self._node_ids(nodes=self.nodes_ref, allow_empty_nodes=True, msg=msg)
return nids
def B(self):
return self.b
def cross_reference(self, model: BDF) -> None:
"""
Cross links the card so referenced cards can be extracted directly
Parameters
----------
model : BDF()
the BDF object
"""
msg = ', which is required by CDAMP4 eid=%s' % (self.eid)
self.nodes_ref = model.EmptyNodes(self.node_ids, msg=msg)
def safe_cross_reference(self, model, xref_errors):
"""
Cross links the card so referenced cards can be extracted directly
Parameters
----------
model : BDF()
the BDF object
"""
#msg = ', which is required by CDAMP4 eid=%s' % (self.eid)
#self.nodes_ref = model.safe_empty_nodes(self.node_ids, msg=msg)
self.cross_reference(model)
def uncross_reference(self) -> None:
"""Removes cross-reference links"""
self.nodes = self.node_ids
self.nodes_ref = None
def raw_fields(self):
list_fields = ['CDAMP4', self.eid, self.b] + self.node_ids
return list_fields
def write_card(self, size: int=8, is_double: bool=False) -> str:
card = self.raw_fields()
return self.comment + print_card_8(card)
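``add_card`` in CDAMP4 takes an ``icard`` argument because one CDAMP4 card can define two elements, each occupying four fields, with ``ioffset = icard * 4`` selecting the slice (hence the ``len(card) <= 9`` check). A standalone sketch over a hypothetical flat field list:

```python
# Hypothetical flat field list for a CDAMP4 card carrying two elements:
# [name, eid1, b1, s1, s2, eid2, b2, s1, s2]
fields = ['CDAMP4', 71, 1.5, 101, 102, 72, 2.5, 103, 104]

def read_cdamp4(fields, icard):
    """Mirror of the ioffset arithmetic in CDAMP4.add_card."""
    ioffset = icard * 4
    eid = fields[1 + ioffset]
    b = fields[2 + ioffset]
    nids = [fields[3 + ioffset], fields[4 + ioffset]]
    return eid, b, nids

print(read_cdamp4(fields, 0))  # (71, 1.5, [101, 102])
print(read_cdamp4(fields, 1))  # (72, 2.5, [103, 104])
```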
class CDAMP5(LineDamper):
    """
    Defines a damping element that refers to a material property entry and
    connects grid or scalar points.

    +--------+-----+-----+----+----+
    |   1    |  2  |  3  |  4 | 5  |
    +========+=====+=====+====+====+
    | CDAMP5 | EID | PID | N1 | N2 |
    +--------+-----+-----+----+----+
    """
type = 'CDAMP5'
_field_map = {
1: 'eid', 2:'pid',
}
def _update_field_helper(self, n, value):
if n == 3:
self.nodes[0] = value
elif n == 4:
self.nodes[1] = value
else:
raise KeyError('Field %r=%r is an invalid %s entry.' % (n, value, self.type))
def __init__(self, eid, pid, nids, comment=''):
"""
Creates a CDAMP5 card
Parameters
----------
eid : int
element id
pid : int
property id (PDAMP5)
nids : List[int, int]
GRID/SPOINT ids
comment : str; default=''
a comment for the card
"""
LineDamper.__init__(self)
if comment:
self.comment = comment
self.eid = eid
#: Property ID
self.pid = pid
self.nodes = self.prepare_node_ids(nids, allow_empty_nodes=True)
assert len(self.nodes) == 2
self.nodes_ref = None
self.pid_ref = None
@classmethod
def export_to_hdf5(cls, h5_file, model, eids):
"""exports the elements in a vectorized way"""
#comments = []
pids = []
nodes = []
for eid in eids:
element = model.elements[eid]
#comments.append(element.comment)
pids.append(element.pid)
nodes.append([nid if nid is not None else 0 for nid in element.nodes])
#h5_file.create_dataset('_comment', data=comments)
h5_file.create_dataset('eid', data=eids)
h5_file.create_dataset('pid', data=pids)
h5_file.create_dataset('nodes', data=nodes)
@classmethod
def add_card(cls, card, comment=''):
"""
Adds a CDAMP5 card from ``BDF.add_card(...)``
Parameters
----------
card : BDFCard()
a BDFCard object
comment : str; default=''
a comment for the card
"""
eid = integer(card, 1, 'eid')
pid = integer(card, 2, 'pid')
nids = [integer_or_blank(card, 3, 'n1', 0),
integer_or_blank(card, 4, 'n2', 0)]
assert len(card) <= 5, 'len(CDAMP5 card) = %i\ncard=%s' % (len(card), card)
return CDAMP5(eid, pid, nids, comment=comment)
@classmethod
def add_op2_data(cls, data, comment=''):
"""
Adds a CDAMP5 card from the OP2
Parameters
----------
data : List[varies]
a list of fields defined in OP2 format
comment : str; default=''
a comment for the card
"""
eid = data[0]
pid = data[1]
nids = [data[2], data[3]]
return CDAMP5(eid, pid, nids, comment=comment)
def _verify(self, xref):
eid = self.eid
pid = self.Pid()
nids = self.node_ids
assert isinstance(eid, integer_types)
assert isinstance(pid, integer_types)
for i, nid in enumerate(nids):
assert nid is None or isinstance(nid, integer_types), 'nid%i is not an integer/None; nid=%s' % (i, nid)
if xref:
assert self.pid_ref.type in ['PDAMP5'], 'pid=%i self.pid_ref.type=%s' % (pid, self.pid_ref.type)
b = self.B()
assert isinstance(b, float)
def cross_reference(self, model: BDF) -> None:
"""
Cross links the card so referenced cards can be extracted directly
Parameters
----------
model : BDF()
the BDF object
"""
msg = ', which is required by CDAMP5 eid=%s' % (self.eid)
self.nodes_ref = model.EmptyNodes(self.node_ids, msg=msg)
self.pid_ref = model.Property(self.pid, msg=msg)
def safe_cross_reference(self, model, xref_errors):
"""
Cross links the card so referenced cards can be extracted directly
Parameters
----------
model : BDF()
the BDF object
"""
msg = ', which is required by CDAMP5 eid=%s' % (self.eid)
self.nodes_ref = model.EmptyNodes(self.node_ids, msg=msg)
self.pid_ref = model.safe_property(self.pid, self.eid, xref_errors, msg=msg)
def uncross_reference(self) -> None:
"""Removes cross-reference links"""
self.nodes = self.node_ids
self.pid = self.Pid()
self.nodes_ref = None
self.pid_ref = None
def B(self):
return self.pid_ref.b
@property
def node_ids(self):
return self._node_ids(nodes=self.nodes_ref, allow_empty_nodes=True)
def raw_fields(self):
nodes = self.node_ids
list_fields = ['CDAMP5', self.eid, self.Pid(), nodes[0], nodes[1]]
return list_fields
def write_card(self, size: int=8, is_double: bool=False) -> str:
card = self.raw_fields()
return self.comment + print_card_8(card)
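The ``export_to_hdf5`` methods above all apply the same padding rule before writing the ``nodes`` dataset: a blank (``None``) node id becomes 0 so every row is a fixed-width integer list. A standalone sketch of just that step (hypothetical data; h5py omitted):

```python
def pad_node_ids(elements):
    """elements: list of (eid, nodes) pairs; returns parallel eid/node
    lists with None node ids replaced by 0, as export_to_hdf5 does."""
    eids = []
    nodes = []
    for eid, nids in elements:
        eids.append(eid)
        nodes.append([nid if nid is not None else 0 for nid in nids])
    return eids, nodes

eids, nodes = pad_node_ids([(10, [101, None]), (11, [None, 202])])
print(eids)   # [10, 11]
print(nodes)  # [[101, 0], [0, 202]]
```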
class CVISC(LineDamper):
"""
Viscous Damper Connection
Defines a viscous damper element.
+-------+-----+-----+----+----+
| 1 | 2 | 3 | 4 | 5 |
+=======+=====+=====+====+====+
| CVISC | EID | PID | G1 | G2 |
+-------+-----+-----+----+----+
"""
type = 'CVISC'
_field_map = {
1: 'eid', 2:'pid',
}
def _update_field_helper(self, n, value):
if n == 3:
self.nodes[0] = value
elif n == 4:
self.nodes[1] = value
else:
raise KeyError('Field %r=%r is an invalid %s entry.' % (n, value, self.type))
def __init__(self, eid, pid, nids, comment=''):
"""
Creates a CVISC card
Parameters
----------
eid : int
element id
pid : int
property id (PVISC)
nids : List[int, int]
GRID ids
comment : str; default=''
a comment for the card
"""
LineDamper.__init__(self)
if comment:
self.comment = comment
self.eid = eid
self.pid = pid
self.nodes = self.prepare_node_ids(nids)
assert len(self.nodes) == 2
self.nodes_ref = None
self.pid_ref = None
@classmethod
def export_to_hdf5(cls, h5_file, model, eids):
"""exports the elements in a vectorized way"""
#comments = []
pids = []
nodes = []
for eid in eids:
element = model.elements[eid]
#comments.append(element.comment)
pids.append(element.pid)
nodes.append(element.nodes)
#h5_file.create_dataset('_comment', data=comments)
h5_file.create_dataset('eid', data=eids)
h5_file.create_dataset('pid', data=pids)
h5_file.create_dataset('nodes', data=nodes)
@classmethod
def add_card(cls, card, comment=''):
"""
Adds a CVISC card from ``BDF.add_card(...)``
Parameters
----------
card : BDFCard()
a BDFCard object
comment : str; default=''
a comment for the card
"""
eid = integer(card, 1, 'eid')
pid = integer_or_blank(card, 2, 'pid', eid)
nids = [integer_or_blank(card, 3, 'n1', 0),
integer_or_blank(card, 4, 'n2', 0)]
assert len(card) <= 5, 'len(CVISC card) = %i\ncard=%s' % (len(card), card)
return CVISC(eid, pid, nids, comment=comment)
@classmethod
def add_op2_data(cls, data, comment=''):
"""
Adds a CVISC card from the OP2
Parameters
----------
data : List[varies]
a list of fields defined in OP2 format
comment : str; default=''
a comment for the card
"""
eid = data[0]
pid = data[1]
nids = data[2:4]
return CVISC(eid, pid, nids, comment=comment)
def cross_reference(self, model: BDF) -> None:
"""
Cross links the card so referenced cards can be extracted directly
Parameters
----------
model : BDF()
the BDF object
"""
msg = ', which is required by CVISC eid=%s' % self.eid
self.nodes_ref = model.Nodes(self.nodes, msg=msg)
self.pid_ref = model.Property(self.pid, msg=msg)
def safe_cross_reference(self, model, xref_errors):
"""
Cross links the card so referenced cards can be extracted directly
Parameters
----------
model : BDF()
the BDF object
"""
msg = ', which is required by CVISC eid=%s' % (self.eid)
self.nodes_ref = model.EmptyNodes(self.node_ids, msg=msg)
self.pid_ref = model.safe_property(self.pid, self.eid, xref_errors, msg=msg)
def uncross_reference(self) -> None:
"""Removes cross-reference links"""
self.nodes = self.node_ids
self.pid = self.Pid()
self.nodes_ref = None
self.pid_ref = None
def _verify(self, xref):
eid = self.eid
pid = self.Pid()
b = self.B()
nids = self.node_ids
assert isinstance(eid, integer_types)
assert isinstance(pid, integer_types)
assert isinstance(b, float)
for i, nid in enumerate(nids):
assert nid is None or isinstance(nid, integer_types), 'nid%i is not an integer/None; nid=%s' % (i, nid)
if xref:
assert self.pid_ref.type in ['PVISC'], 'pid=%i self.pid_ref.type=%s' % (pid, self.pid_ref.type)
def B(self):
return self.pid_ref.ce
def get_edge_ids(self):
return [tuple(sorted(self.node_ids))]
@property
def node_ids(self):
return self._node_ids(nodes=self.nodes_ref, allow_empty_nodes=True)
def raw_fields(self):
list_fields = ['CVISC', self.eid, self.Pid()] + self.node_ids
return list_fields
def repr_fields(self):
return self.raw_fields()
def write_card(self, size: int=8, is_double: bool=False) -> str:
card = self.raw_fields()
return self.comment + print_card_8(card)
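CDAMP2's ``get_edge_ids`` only reports an edge when both endpoints are integer node ids (a blank SPOINT slot yields no edge), whereas CVISC assumes two GRIDs and always returns one sorted pair. A standalone sketch of the guarded version:

```python
def get_edge_ids_sketch(node_ids):
    """Mimics CDAMP2.get_edge_ids: return a sorted (nid_a, nid_b) edge
    only when both endpoints are real integer node ids."""
    nid_a, nid_b = node_ids
    if isinstance(nid_a, int) and isinstance(nid_b, int):
        return [tuple(sorted(node_ids))]
    return []

print(get_edge_ids_sketch([20, 10]))    # [(10, 20)]
print(get_edge_ids_sketch([10, None]))  # []
```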
# -- dataset metadata: python/python_backup/Python_Progs/PYTHON_LEGACY_PROJECTS/Environment_Variables.py
#    (repo SayanGhoshBDA/code-backup @ 8b6135f, license MIT)
] | 5 | 2020-02-11T16:02:21.000Z | 2021-02-05T07:48:30.000Z | #Program to acess environment variables
import os
print('*-------------------------------------*')
print(os.environ)
print('*-------------------------------------*')
#Acess a particular environment variable
print(os.environ['SSH_AUTH_SOCK'])
print('*-------------------------------------*')
print(os.environ['PATH'])
print('*-------------------------------------*')
# -- dataset metadata: pybind/slxos/v16r_1_00b/brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/__init__.py
#    (repo shivharis/pybind @ 4e1c6d5, license Apache-2.0)
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
class show_mpls_lsp_frr_info(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module brocade-mpls - based on the path /brocade_mpls_rpc/show-mpls-bypass-lsp-detail/output/bypass-lsp/show-mpls-lsp-detail-info/show-mpls-lsp-frr-info. Each member element of
the container is represented as a class variable - with a specific
YANG type.
"""
__slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__lsp_frr_operational_status','__lsp_frr_operational_status_active','__lsp_frr_down_reason','__lsp_frr_computation_mode_default','__lsp_frr_computation_mode_use_bypass_metric','__lsp_frr_computation_mode_use_bypass_liberal','__lsp_frr_group_computation_mode_default','__lsp_frr_group_computation_mode_add_penalty','__lsp_frr_group_computation_mode_exclude_groups','__lsp_frr_group_computation_mode_high_cost','__lsp_frr_out_port_id','__lsp_frr_out_port_name','__lsp_frr_out_label','__lsp_frr_path_cost','__lsp_frr_bypass_name','__lsp_frr_forwarding_protected_up','__lsp_frr_secondary_swithover_time','__lsp_frr_hold_time',)
_yang_name = 'show-mpls-lsp-frr-info'
_rest_name = ''
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
path_helper_ = kwargs.pop("path_helper", None)
if path_helper_ is False:
self._path_helper = False
elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
self._path_helper = path_helper_
elif hasattr(self, "_parent"):
path_helper_ = getattr(self._parent, "_path_helper", False)
self._path_helper = path_helper_
else:
self._path_helper = False
extmethods = kwargs.pop("extmethods", None)
if extmethods is False:
self._extmethods = False
elif extmethods is not None and isinstance(extmethods, dict):
self._extmethods = extmethods
elif hasattr(self, "_parent"):
extmethods = getattr(self._parent, "_extmethods", None)
self._extmethods = extmethods
else:
self._extmethods = False
self.__lsp_frr_computation_mode_use_bypass_metric = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-computation-mode-use-bypass-metric", rest_name="lsp-frr-computation-mode-use-bypass-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_frr_group_computation_mode_add_penalty = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-group-computation-mode-add-penalty", rest_name="lsp-frr-group-computation-mode-add-penalty", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_frr_computation_mode_default = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-computation-mode-default", rest_name="lsp-frr-computation-mode-default", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_frr_hold_time = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-hold-time", rest_name="lsp-frr-hold-time", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
self.__lsp_frr_out_label = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-out-label", rest_name="lsp-frr-out-label", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
self.__lsp_frr_forwarding_protected_up = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-forwarding-protected-up", rest_name="lsp-frr-forwarding-protected-up", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_frr_operational_status = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-operational-status", rest_name="lsp-frr-operational-status", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_frr_secondary_swithover_time = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-secondary-swithover-time", rest_name="lsp-frr-secondary-swithover-time", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
self.__lsp_frr_computation_mode_use_bypass_liberal = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-computation-mode-use-bypass-liberal", rest_name="lsp-frr-computation-mode-use-bypass-liberal", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_frr_out_port_name = YANGDynClass(base=unicode, is_leaf=True, yang_name="lsp-frr-out-port-name", rest_name="lsp-frr-out-port-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)
self.__lsp_frr_group_computation_mode_exclude_groups = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-group-computation-mode-exclude-groups", rest_name="lsp-frr-group-computation-mode-exclude-groups", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_frr_group_computation_mode_default = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-group-computation-mode-default", rest_name="lsp-frr-group-computation-mode-default", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_frr_group_computation_mode_high_cost = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-group-computation-mode-high-cost", rest_name="lsp-frr-group-computation-mode-high-cost", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_frr_path_cost = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-path-cost", rest_name="lsp-frr-path-cost", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
self.__lsp_frr_operational_status_active = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-operational-status-active", rest_name="lsp-frr-operational-status-active", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_frr_bypass_name = YANGDynClass(base=unicode, is_leaf=True, yang_name="lsp-frr-bypass-name", rest_name="lsp-frr-bypass-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)
self.__lsp_frr_out_port_id = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-out-port-id", rest_name="lsp-frr-out-port-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
self.__lsp_frr_down_reason = YANGDynClass(base=unicode, is_leaf=True, yang_name="lsp-frr-down-reason", rest_name="lsp-frr-down-reason", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return [u'brocade_mpls_rpc', u'show-mpls-bypass-lsp-detail', u'output', u'bypass-lsp', u'show-mpls-lsp-detail-info', u'show-mpls-lsp-frr-info']
def _rest_path(self):
if hasattr(self, "_parent"):
if self._rest_name:
return self._parent._rest_path()+[self._rest_name]
else:
return self._parent._rest_path()
else:
return [u'show-mpls-bypass-lsp-detail', u'output', u'bypass-lsp']
def _get_lsp_frr_operational_status(self):
"""
Getter method for lsp_frr_operational_status, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_operational_status (boolean)
YANG Description: LSP detour or backup path operational status
"""
return self.__lsp_frr_operational_status
def _set_lsp_frr_operational_status(self, v, load=False):
"""
Setter method for lsp_frr_operational_status, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_operational_status (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_frr_operational_status is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_frr_operational_status() directly.
YANG Description: LSP detour or backup path operational status
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-frr-operational-status", rest_name="lsp-frr-operational-status", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_frr_operational_status must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-operational-status", rest_name="lsp-frr-operational-status", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_frr_operational_status = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_frr_operational_status(self):
self.__lsp_frr_operational_status = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-operational-status", rest_name="lsp-frr-operational-status", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_frr_operational_status_active(self):
"""
Getter method for lsp_frr_operational_status_active, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_operational_status_active (boolean)
YANG Description: LSP detour or backup path operational status is active
"""
return self.__lsp_frr_operational_status_active
def _set_lsp_frr_operational_status_active(self, v, load=False):
"""
Setter method for lsp_frr_operational_status_active, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_operational_status_active (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_frr_operational_status_active is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_frr_operational_status_active() directly.
YANG Description: LSP detour or backup path operational status is active
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-frr-operational-status-active", rest_name="lsp-frr-operational-status-active", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_frr_operational_status_active must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-operational-status-active", rest_name="lsp-frr-operational-status-active", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_frr_operational_status_active = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_frr_operational_status_active(self):
self.__lsp_frr_operational_status_active = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-operational-status-active", rest_name="lsp-frr-operational-status-active", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_frr_down_reason(self):
"""
Getter method for lsp_frr_down_reason, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_down_reason (string)
YANG Description: LSP detour or backup down reason
"""
return self.__lsp_frr_down_reason
def _set_lsp_frr_down_reason(self, v, load=False):
"""
Setter method for lsp_frr_down_reason, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_down_reason (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_frr_down_reason is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_frr_down_reason() directly.
YANG Description: LSP detour or backup down reason
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=unicode, is_leaf=True, yang_name="lsp-frr-down-reason", rest_name="lsp-frr-down-reason", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_frr_down_reason must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=unicode, is_leaf=True, yang_name="lsp-frr-down-reason", rest_name="lsp-frr-down-reason", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)""",
})
self.__lsp_frr_down_reason = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_frr_down_reason(self):
self.__lsp_frr_down_reason = YANGDynClass(base=unicode, is_leaf=True, yang_name="lsp-frr-down-reason", rest_name="lsp-frr-down-reason", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)
def _get_lsp_frr_computation_mode_default(self):
"""
Getter method for lsp_frr_computation_mode_default, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_computation_mode_default (boolean)
YANG Description: LSP FRR path computation mode default
"""
return self.__lsp_frr_computation_mode_default
def _set_lsp_frr_computation_mode_default(self, v, load=False):
"""
Setter method for lsp_frr_computation_mode_default, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_computation_mode_default (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_frr_computation_mode_default is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_frr_computation_mode_default() directly.
YANG Description: LSP FRR path computation mode default
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-frr-computation-mode-default", rest_name="lsp-frr-computation-mode-default", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_frr_computation_mode_default must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-computation-mode-default", rest_name="lsp-frr-computation-mode-default", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_frr_computation_mode_default = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_frr_computation_mode_default(self):
self.__lsp_frr_computation_mode_default = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-computation-mode-default", rest_name="lsp-frr-computation-mode-default", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_frr_computation_mode_use_bypass_metric(self):
"""
Getter method for lsp_frr_computation_mode_use_bypass_metric, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_computation_mode_use_bypass_metric (boolean)
YANG Description: LSP FRR path computation mode is use bypass metric
"""
return self.__lsp_frr_computation_mode_use_bypass_metric
def _set_lsp_frr_computation_mode_use_bypass_metric(self, v, load=False):
"""
Setter method for lsp_frr_computation_mode_use_bypass_metric, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_computation_mode_use_bypass_metric (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_frr_computation_mode_use_bypass_metric is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_frr_computation_mode_use_bypass_metric() directly.
YANG Description: LSP FRR path computation mode is use bypass metric
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-frr-computation-mode-use-bypass-metric", rest_name="lsp-frr-computation-mode-use-bypass-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_frr_computation_mode_use_bypass_metric must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-computation-mode-use-bypass-metric", rest_name="lsp-frr-computation-mode-use-bypass-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_frr_computation_mode_use_bypass_metric = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_frr_computation_mode_use_bypass_metric(self):
self.__lsp_frr_computation_mode_use_bypass_metric = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-computation-mode-use-bypass-metric", rest_name="lsp-frr-computation-mode-use-bypass-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_frr_computation_mode_use_bypass_liberal(self):
"""
Getter method for lsp_frr_computation_mode_use_bypass_liberal, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_computation_mode_use_bypass_liberal (boolean)
YANG Description: LSP FRR path computation mode is use bypass liberal
"""
return self.__lsp_frr_computation_mode_use_bypass_liberal
def _set_lsp_frr_computation_mode_use_bypass_liberal(self, v, load=False):
"""
Setter method for lsp_frr_computation_mode_use_bypass_liberal, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_computation_mode_use_bypass_liberal (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_frr_computation_mode_use_bypass_liberal is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_frr_computation_mode_use_bypass_liberal() directly.
YANG Description: LSP FRR path computation mode is use bypass liberal
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-frr-computation-mode-use-bypass-liberal", rest_name="lsp-frr-computation-mode-use-bypass-liberal", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_frr_computation_mode_use_bypass_liberal must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-computation-mode-use-bypass-liberal", rest_name="lsp-frr-computation-mode-use-bypass-liberal", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_frr_computation_mode_use_bypass_liberal = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_frr_computation_mode_use_bypass_liberal(self):
self.__lsp_frr_computation_mode_use_bypass_liberal = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-computation-mode-use-bypass-liberal", rest_name="lsp-frr-computation-mode-use-bypass-liberal", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_frr_group_computation_mode_default(self):
"""
Getter method for lsp_frr_group_computation_mode_default, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_group_computation_mode_default (boolean)
YANG Description: LSP FRR path computation group mode default
"""
return self.__lsp_frr_group_computation_mode_default
def _set_lsp_frr_group_computation_mode_default(self, v, load=False):
"""
Setter method for lsp_frr_group_computation_mode_default, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_group_computation_mode_default (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_frr_group_computation_mode_default is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_frr_group_computation_mode_default() directly.
YANG Description: LSP FRR path computation group mode default
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-frr-group-computation-mode-default", rest_name="lsp-frr-group-computation-mode-default", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_frr_group_computation_mode_default must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-group-computation-mode-default", rest_name="lsp-frr-group-computation-mode-default", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_frr_group_computation_mode_default = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_frr_group_computation_mode_default(self):
self.__lsp_frr_group_computation_mode_default = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-group-computation-mode-default", rest_name="lsp-frr-group-computation-mode-default", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_frr_group_computation_mode_add_penalty(self):
"""
Getter method for lsp_frr_group_computation_mode_add_penalty, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_group_computation_mode_add_penalty (boolean)
YANG Description: LSP FRR path computation group mode is add penalty
"""
return self.__lsp_frr_group_computation_mode_add_penalty
def _set_lsp_frr_group_computation_mode_add_penalty(self, v, load=False):
"""
Setter method for lsp_frr_group_computation_mode_add_penalty, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_group_computation_mode_add_penalty (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_frr_group_computation_mode_add_penalty is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_frr_group_computation_mode_add_penalty() directly.
YANG Description: LSP FRR path computation group mode is add penalty
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-frr-group-computation-mode-add-penalty", rest_name="lsp-frr-group-computation-mode-add-penalty", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_frr_group_computation_mode_add_penalty must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-group-computation-mode-add-penalty", rest_name="lsp-frr-group-computation-mode-add-penalty", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_frr_group_computation_mode_add_penalty = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_frr_group_computation_mode_add_penalty(self):
self.__lsp_frr_group_computation_mode_add_penalty = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-group-computation-mode-add-penalty", rest_name="lsp-frr-group-computation-mode-add-penalty", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_frr_group_computation_mode_exclude_groups(self):
"""
Getter method for lsp_frr_group_computation_mode_exclude_groups, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_group_computation_mode_exclude_groups (boolean)
YANG Description: LSP FRR path computation group mode is exclude groups
"""
return self.__lsp_frr_group_computation_mode_exclude_groups
def _set_lsp_frr_group_computation_mode_exclude_groups(self, v, load=False):
"""
Setter method for lsp_frr_group_computation_mode_exclude_groups, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_group_computation_mode_exclude_groups (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_frr_group_computation_mode_exclude_groups is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_frr_group_computation_mode_exclude_groups() directly.
YANG Description: LSP FRR path computation group mode is exclude groups
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-frr-group-computation-mode-exclude-groups", rest_name="lsp-frr-group-computation-mode-exclude-groups", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_frr_group_computation_mode_exclude_groups must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-group-computation-mode-exclude-groups", rest_name="lsp-frr-group-computation-mode-exclude-groups", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_frr_group_computation_mode_exclude_groups = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_frr_group_computation_mode_exclude_groups(self):
self.__lsp_frr_group_computation_mode_exclude_groups = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-group-computation-mode-exclude-groups", rest_name="lsp-frr-group-computation-mode-exclude-groups", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_frr_group_computation_mode_high_cost(self):
"""
Getter method for lsp_frr_group_computation_mode_high_cost, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_group_computation_mode_high_cost (boolean)
YANG Description: LSP FRR path computation group mode is high cost
"""
return self.__lsp_frr_group_computation_mode_high_cost
def _set_lsp_frr_group_computation_mode_high_cost(self, v, load=False):
"""
Setter method for lsp_frr_group_computation_mode_high_cost, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_group_computation_mode_high_cost (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_frr_group_computation_mode_high_cost is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_frr_group_computation_mode_high_cost() directly.
YANG Description: LSP FRR path computation group mode is high cost
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-frr-group-computation-mode-high-cost", rest_name="lsp-frr-group-computation-mode-high-cost", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_frr_group_computation_mode_high_cost must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-group-computation-mode-high-cost", rest_name="lsp-frr-group-computation-mode-high-cost", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_frr_group_computation_mode_high_cost = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_frr_group_computation_mode_high_cost(self):
self.__lsp_frr_group_computation_mode_high_cost = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-group-computation-mode-high-cost", rest_name="lsp-frr-group-computation-mode-high-cost", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_frr_out_port_id(self):
"""
Getter method for lsp_frr_out_port_id, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_out_port_id (uint32)
YANG Description: LSP detour or backup path outgoing port id
"""
return self.__lsp_frr_out_port_id
def _set_lsp_frr_out_port_id(self, v, load=False):
"""
Setter method for lsp_frr_out_port_id, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_out_port_id (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_frr_out_port_id is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_frr_out_port_id() directly.
YANG Description: LSP detour or backup path outgoing port id
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-out-port-id", rest_name="lsp-frr-out-port-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_frr_out_port_id must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-out-port-id", rest_name="lsp-frr-out-port-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)""",
})
self.__lsp_frr_out_port_id = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_frr_out_port_id(self):
self.__lsp_frr_out_port_id = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-out-port-id", rest_name="lsp-frr-out-port-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
def _get_lsp_frr_out_port_name(self):
"""
Getter method for lsp_frr_out_port_name, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_out_port_name (string)
YANG Description: LSP detour or backup path outgoing port name
"""
return self.__lsp_frr_out_port_name
def _set_lsp_frr_out_port_name(self, v, load=False):
"""
Setter method for lsp_frr_out_port_name, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_out_port_name (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_frr_out_port_name is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_frr_out_port_name() directly.
YANG Description: LSP detour or backup path outgoing port name
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=unicode, is_leaf=True, yang_name="lsp-frr-out-port-name", rest_name="lsp-frr-out-port-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_frr_out_port_name must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=unicode, is_leaf=True, yang_name="lsp-frr-out-port-name", rest_name="lsp-frr-out-port-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)""",
})
self.__lsp_frr_out_port_name = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_frr_out_port_name(self):
self.__lsp_frr_out_port_name = YANGDynClass(base=unicode, is_leaf=True, yang_name="lsp-frr-out-port-name", rest_name="lsp-frr-out-port-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)
def _get_lsp_frr_out_label(self):
"""
Getter method for lsp_frr_out_label, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_out_label (uint32)
YANG Description: LSP detour or backup path outgoing label
"""
return self.__lsp_frr_out_label
def _set_lsp_frr_out_label(self, v, load=False):
"""
Setter method for lsp_frr_out_label, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_out_label (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_frr_out_label is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_frr_out_label() directly.
YANG Description: LSP detour or backup path outgoing label
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-out-label", rest_name="lsp-frr-out-label", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_frr_out_label must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-out-label", rest_name="lsp-frr-out-label", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)""",
})
self.__lsp_frr_out_label = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_frr_out_label(self):
self.__lsp_frr_out_label = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-out-label", rest_name="lsp-frr-out-label", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
def _get_lsp_frr_path_cost(self):
"""
Getter method for lsp_frr_path_cost, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_path_cost (uint32)
YANG Description: LSP detour or backup path cost
"""
return self.__lsp_frr_path_cost
def _set_lsp_frr_path_cost(self, v, load=False):
"""
Setter method for lsp_frr_path_cost, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_path_cost (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_frr_path_cost is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_frr_path_cost() directly.
YANG Description: LSP detour or backup path cost
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-path-cost", rest_name="lsp-frr-path-cost", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_frr_path_cost must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-path-cost", rest_name="lsp-frr-path-cost", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)""",
})
self.__lsp_frr_path_cost = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_frr_path_cost(self):
self.__lsp_frr_path_cost = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-path-cost", rest_name="lsp-frr-path-cost", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
def _get_lsp_frr_bypass_name(self):
"""
Getter method for lsp_frr_bypass_name, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_bypass_name (string)
YANG Description: LSP backup path bypass name
"""
return self.__lsp_frr_bypass_name
def _set_lsp_frr_bypass_name(self, v, load=False):
"""
Setter method for lsp_frr_bypass_name, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_bypass_name (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_frr_bypass_name is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_frr_bypass_name() directly.
YANG Description: LSP backup path bypass name
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=unicode, is_leaf=True, yang_name="lsp-frr-bypass-name", rest_name="lsp-frr-bypass-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_frr_bypass_name must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=unicode, is_leaf=True, yang_name="lsp-frr-bypass-name", rest_name="lsp-frr-bypass-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)""",
})
self.__lsp_frr_bypass_name = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_frr_bypass_name(self):
self.__lsp_frr_bypass_name = YANGDynClass(base=unicode, is_leaf=True, yang_name="lsp-frr-bypass-name", rest_name="lsp-frr-bypass-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)
def _get_lsp_frr_forwarding_protected_up(self):
"""
Getter method for lsp_frr_forwarding_protected_up, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_forwarding_protected_up (boolean)
YANG Description: LSP FRR forwarding state protected is up
"""
return self.__lsp_frr_forwarding_protected_up
def _set_lsp_frr_forwarding_protected_up(self, v, load=False):
"""
Setter method for lsp_frr_forwarding_protected_up, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_forwarding_protected_up (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_frr_forwarding_protected_up is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_frr_forwarding_protected_up() directly.
YANG Description: LSP FRR forwarding state protected is up
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-frr-forwarding-protected-up", rest_name="lsp-frr-forwarding-protected-up", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_frr_forwarding_protected_up must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-forwarding-protected-up", rest_name="lsp-frr-forwarding-protected-up", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_frr_forwarding_protected_up = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_frr_forwarding_protected_up(self):
self.__lsp_frr_forwarding_protected_up = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-frr-forwarding-protected-up", rest_name="lsp-frr-forwarding-protected-up", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_frr_secondary_swithover_time(self):
"""
Getter method for lsp_frr_secondary_swithover_time, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_secondary_swithover_time (uint32)
YANG Description: LSP secondary switchover time
"""
return self.__lsp_frr_secondary_swithover_time
def _set_lsp_frr_secondary_swithover_time(self, v, load=False):
"""
Setter method for lsp_frr_secondary_swithover_time, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_secondary_swithover_time (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_frr_secondary_swithover_time is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_frr_secondary_swithover_time() directly.
YANG Description: LSP secondary switchover time
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-secondary-swithover-time", rest_name="lsp-frr-secondary-swithover-time", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_frr_secondary_swithover_time must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-secondary-swithover-time", rest_name="lsp-frr-secondary-swithover-time", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)""",
})
self.__lsp_frr_secondary_swithover_time = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_frr_secondary_swithover_time(self):
self.__lsp_frr_secondary_swithover_time = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-secondary-swithover-time", rest_name="lsp-frr-secondary-swithover-time", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
def _get_lsp_frr_hold_time(self):
"""
Getter method for lsp_frr_hold_time, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_hold_time (uint32)
YANG Description: LSP hold time
"""
return self.__lsp_frr_hold_time
def _set_lsp_frr_hold_time(self, v, load=False):
"""
Setter method for lsp_frr_hold_time, mapped from YANG variable /brocade_mpls_rpc/show_mpls_bypass_lsp_detail/output/bypass_lsp/show_mpls_lsp_detail_info/show_mpls_lsp_frr_info/lsp_frr_hold_time (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_frr_hold_time is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_frr_hold_time() directly.
YANG Description: LSP hold time
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-hold-time", rest_name="lsp-frr-hold-time", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_frr_hold_time must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-hold-time", rest_name="lsp-frr-hold-time", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)""",
})
self.__lsp_frr_hold_time = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_frr_hold_time(self):
self.__lsp_frr_hold_time = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-frr-hold-time", rest_name="lsp-frr-hold-time", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
lsp_frr_operational_status = __builtin__.property(_get_lsp_frr_operational_status, _set_lsp_frr_operational_status)
lsp_frr_operational_status_active = __builtin__.property(_get_lsp_frr_operational_status_active, _set_lsp_frr_operational_status_active)
lsp_frr_down_reason = __builtin__.property(_get_lsp_frr_down_reason, _set_lsp_frr_down_reason)
lsp_frr_computation_mode_default = __builtin__.property(_get_lsp_frr_computation_mode_default, _set_lsp_frr_computation_mode_default)
lsp_frr_computation_mode_use_bypass_metric = __builtin__.property(_get_lsp_frr_computation_mode_use_bypass_metric, _set_lsp_frr_computation_mode_use_bypass_metric)
lsp_frr_computation_mode_use_bypass_liberal = __builtin__.property(_get_lsp_frr_computation_mode_use_bypass_liberal, _set_lsp_frr_computation_mode_use_bypass_liberal)
lsp_frr_group_computation_mode_default = __builtin__.property(_get_lsp_frr_group_computation_mode_default, _set_lsp_frr_group_computation_mode_default)
lsp_frr_group_computation_mode_add_penalty = __builtin__.property(_get_lsp_frr_group_computation_mode_add_penalty, _set_lsp_frr_group_computation_mode_add_penalty)
lsp_frr_group_computation_mode_exclude_groups = __builtin__.property(_get_lsp_frr_group_computation_mode_exclude_groups, _set_lsp_frr_group_computation_mode_exclude_groups)
lsp_frr_group_computation_mode_high_cost = __builtin__.property(_get_lsp_frr_group_computation_mode_high_cost, _set_lsp_frr_group_computation_mode_high_cost)
lsp_frr_out_port_id = __builtin__.property(_get_lsp_frr_out_port_id, _set_lsp_frr_out_port_id)
lsp_frr_out_port_name = __builtin__.property(_get_lsp_frr_out_port_name, _set_lsp_frr_out_port_name)
lsp_frr_out_label = __builtin__.property(_get_lsp_frr_out_label, _set_lsp_frr_out_label)
lsp_frr_path_cost = __builtin__.property(_get_lsp_frr_path_cost, _set_lsp_frr_path_cost)
lsp_frr_bypass_name = __builtin__.property(_get_lsp_frr_bypass_name, _set_lsp_frr_bypass_name)
lsp_frr_forwarding_protected_up = __builtin__.property(_get_lsp_frr_forwarding_protected_up, _set_lsp_frr_forwarding_protected_up)
lsp_frr_secondary_swithover_time = __builtin__.property(_get_lsp_frr_secondary_swithover_time, _set_lsp_frr_secondary_swithover_time)
lsp_frr_hold_time = __builtin__.property(_get_lsp_frr_hold_time, _set_lsp_frr_hold_time)
_pyangbind_elements = {'lsp_frr_operational_status': lsp_frr_operational_status, 'lsp_frr_operational_status_active': lsp_frr_operational_status_active, 'lsp_frr_down_reason': lsp_frr_down_reason, 'lsp_frr_computation_mode_default': lsp_frr_computation_mode_default, 'lsp_frr_computation_mode_use_bypass_metric': lsp_frr_computation_mode_use_bypass_metric, 'lsp_frr_computation_mode_use_bypass_liberal': lsp_frr_computation_mode_use_bypass_liberal, 'lsp_frr_group_computation_mode_default': lsp_frr_group_computation_mode_default, 'lsp_frr_group_computation_mode_add_penalty': lsp_frr_group_computation_mode_add_penalty, 'lsp_frr_group_computation_mode_exclude_groups': lsp_frr_group_computation_mode_exclude_groups, 'lsp_frr_group_computation_mode_high_cost': lsp_frr_group_computation_mode_high_cost, 'lsp_frr_out_port_id': lsp_frr_out_port_id, 'lsp_frr_out_port_name': lsp_frr_out_port_name, 'lsp_frr_out_label': lsp_frr_out_label, 'lsp_frr_path_cost': lsp_frr_path_cost, 'lsp_frr_bypass_name': lsp_frr_bypass_name, 'lsp_frr_forwarding_protected_up': lsp_frr_forwarding_protected_up, 'lsp_frr_secondary_swithover_time': lsp_frr_secondary_swithover_time, 'lsp_frr_hold_time': lsp_frr_hold_time, }
# -*- coding: utf-8 -*-
from packtml.neural_net.mlp import *
from packtml.neural_net.transfer import *
__all__ = [s for s in dir() if not s.startswith("_")]
# Copyright 2014 devbliss GmbH
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import unittest.mock
import git_devbliss.github
import sys
from unittest.mock import call
import requests
class GitHubTest(unittest.TestCase):
@unittest.mock.patch("os.path.exists")
def test_init_with_file(self, exists):
exists.return_value = True
with unittest.mock.patch(
'builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
exists.assert_called_with(gh.token_file)
@unittest.mock.patch("builtins.print")
@unittest.mock.patch("getpass.getpass")
@unittest.mock.patch("builtins.input")
@unittest.mock.patch("os.path.exists")
def test_init_interrupt(self, exists, input_function, getpass,
print_function):
exists.return_value = False
input_function.side_effect = KeyboardInterrupt()
with self.assertRaises(SystemExit):
git_devbliss.github.GitHub()
print_function.assert_called_with()
input_function.assert_has_calls([
call('GitHub username: '),
])
@unittest.mock.patch("builtins.print")
@unittest.mock.patch("requests.post")
@unittest.mock.patch("getpass.getpass")
@unittest.mock.patch("builtins.input")
@unittest.mock.patch("os.path.exists")
def test_init_401(self, exists, input_function, getpass, post,
print_function):
exists.return_value = False
input_function.return_value = 'test_username'
getpass.return_value = 'test_pass'
json_function = unittest.mock.Mock()
json_function.return_value = '{"test_json": "blub"}'
post.return_value = unittest.mock.Mock()
post.return_value.json = json_function
post.return_value.status_code = 401
m = unittest.mock.mock_open()
with unittest.mock.patch('__main__.open', m, create=True):
with self.assertRaises(SystemExit):
git_devbliss.github.GitHub()
post.assert_has_calls([
call('https://api.github.com/authorizations',
headers={'User-Agent': 'git-devbliss/ng',
'Content-Type': 'application/json'},
auth=('test_username', 'test_pass'),
data='{"note": "git-devbliss-ng", "scopes": ["repo"]}'),
call().json()
])
input_function.assert_has_calls([
call('GitHub username: '),
])
print_function.assert_called_with('Fatal: Bad credentials',
file=sys.stderr)
@unittest.mock.patch("builtins.print")
@unittest.mock.patch("requests.post")
@unittest.mock.patch("getpass.getpass")
@unittest.mock.patch("builtins.input")
@unittest.mock.patch("os.path.exists")
def test_init_422(self, exists, input_function, getpass, post,
print_function):
exists.return_value = False
input_function.return_value = 'test_username'
getpass.return_value = 'test_pass'
json_function = unittest.mock.Mock()
json_function.return_value = '{"test_json": "blub"}'
post.return_value = unittest.mock.Mock()
post.return_value.json = json_function
post.return_value.status_code = 422
m = unittest.mock.mock_open()
with unittest.mock.patch('__main__.open', m, create=True):
with self.assertRaises(SystemExit):
git_devbliss.github.GitHub()
post.assert_has_calls([
call('https://api.github.com/authorizations',
headers={'User-Agent': 'git-devbliss/ng',
'Content-Type': 'application/json'},
auth=('test_username', 'test_pass'),
data='{"note": "git-devbliss-ng", "scopes": ["repo"]}'),
call().json()
])
input_function.assert_has_calls([
call('GitHub username: '),
])
print_function.assert_has_calls([
call('There is already a token with the name git-devbliss_ng.',
file=sys.stderr),
call('If you are using git-devbliss on another computer, please '
'copy the ~/.github_token found on that machine to this one.',
file=sys.stderr),
call('If not, please log into your github account and delete the'
' old token at https://github.com/settings/applications',
file=sys.stderr)
])
@unittest.mock.patch("builtins.print")
@unittest.mock.patch("requests.post")
@unittest.mock.patch("getpass.getpass")
@unittest.mock.patch("builtins.input")
@unittest.mock.patch("os.path.exists")
def test_init_404(self, exists, input_function, getpass, post,
print_function):
exists.return_value = False
input_function.return_value = 'test_username'
getpass.return_value = 'test_pass'
json_function = unittest.mock.Mock()
json_function.return_value = '{"test_json": "blub"}'
post.return_value = unittest.mock.Mock()
post.return_value.json = json_function
post.return_value.status_code = 404
m = unittest.mock.mock_open()
with unittest.mock.patch('__main__.open', m, create=True):
with self.assertRaises(SystemExit):
git_devbliss.github.GitHub()
post.assert_has_calls([
call('https://api.github.com/authorizations',
headers={'User-Agent': 'git-devbliss/ng',
'Content-Type': 'application/json'},
auth=('test_username', 'test_pass'),
data='{"note": "git-devbliss-ng", "scopes": ["repo"]}'),
call().json()
])
input_function.assert_has_calls([
call('GitHub username: '),
])
print_function.assert_has_calls([
call('Fatal: GitHub returned status 404:', file=sys.stderr),
call('{"test_json": "blub"}', file=sys.stderr)
])
@unittest.mock.patch("builtins.print")
@unittest.mock.patch("requests.post")
@unittest.mock.patch("getpass.getpass")
@unittest.mock.patch("builtins.input")
@unittest.mock.patch("os.path.exists")
def test_init_no_token(self, exists, input_function, getpass, post,
print_function):
exists.return_value = False
input_function.return_value = 'test_username'
getpass.return_value = 'test_pass'
json_function = unittest.mock.Mock()
json_function.return_value = {"token": ""}
post.return_value = unittest.mock.Mock()
post.return_value.json = json_function
post.return_value.status_code = 200
m = unittest.mock.mock_open()
with unittest.mock.patch('__main__.open', m, create=True):
with self.assertRaises(SystemExit):
git_devbliss.github.GitHub()
post.assert_has_calls([
call('https://api.github.com/authorizations',
headers={'User-Agent': 'git-devbliss/ng',
'Content-Type': 'application/json'},
auth=('test_username', 'test_pass'),
data='{"note": "git-devbliss-ng", "scopes": ["repo"]}'),
call().json()
])
input_function.assert_has_calls([
call('GitHub username: '),
])
print_function.assert_has_calls([
call('Fatal: Bad credentials', file=sys.stderr)
])
@unittest.mock.patch("builtins.print")
@unittest.mock.patch("requests.post")
@unittest.mock.patch("getpass.getpass")
@unittest.mock.patch("builtins.input")
@unittest.mock.patch("os.path.exists")
def test_init(self, exists, input_function, getpass, post,
print_function):
exists.return_value = False
input_function.return_value = 'test_username'
getpass.return_value = 'test_pass'
json_function = unittest.mock.Mock()
json_function.return_value = {"token": "test_token"}
post.return_value = unittest.mock.Mock()
post.return_value.json = json_function
post.return_value.status_code = 200
m = unittest.mock.mock_open(read_data='test_token')
with unittest.mock.patch('builtins.open', m, create=True):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
exists.assert_called_with(gh.token_file)
m.assert_has_calls([
call(gh.token_file, 'w'),
call().__enter__(),
call().write('test_token'),
call().__exit__(None, None, None),
call(gh.token_file),
call().__enter__(),
call().read(),
call().__exit__(None, None, None)
])
handle = m()
handle.write.assert_called_once_with('test_token')
post.assert_has_calls([
call('https://api.github.com/authorizations',
headers={'Content-Type': 'application/json',
'User-Agent': 'git-devbliss/ng'},
auth=('test_username', 'test_pass'),
data='{"note": "git-devbliss-ng", "scopes": ["repo"]}'),
call().json()
])
self.assertEqual(print_function.call_count, 0)
@unittest.mock.patch("builtins.print")
@unittest.mock.patch("requests.post")
@unittest.mock.patch("getpass.getpass")
@unittest.mock.patch("builtins.input")
@unittest.mock.patch("os.path.exists")
def test_init_two_factor(self, exists, input_function, getpass, post,
print_function):
exists.return_value = False
input_function.side_effect = ['test_username', 'two_factor_code']
getpass.return_value = 'test_pass'
json_function1 = unittest.mock.Mock()
json_function2 = unittest.mock.Mock()
json_function1.return_value = {
"documentation_url": "https://developer.github.com/v3/auth"
"#working-with-two-factor-authentication",
"message": "Must specify two-factor authentication OTP code."}
json_function2.return_value = {'token': 'test_token'}
post1 = unittest.mock.Mock()
post2 = unittest.mock.Mock()
post.side_effect = [post1, post2]
post1.json = json_function1
post2.json = json_function2
post1.status_code = 401
post2.status_code = 201
m = unittest.mock.mock_open(read_data='test_token')
with unittest.mock.patch('builtins.open', m, create=True):
gh = git_devbliss.github.GitHub()
m.assert_has_calls([
call(gh.token_file, 'w'),
call().__enter__(),
call().write('test_token'),
call().__exit__(None, None, None),
call(gh.token_file),
call().__enter__(),
call().read(),
call().__exit__(None, None, None)
])
handle = m()
handle.write.assert_called_once_with('test_token')
post.assert_has_calls([
call('https://api.github.com/authorizations',
data='{"note": "git-devbliss-ng", "scopes": ["repo"]}',
headers={'User-Agent': 'git-devbliss/ng',
'Content-Type': 'application/json'},
auth=('test_username', 'test_pass')),
call('https://api.github.com/authorizations',
data='{"note": "git-devbliss-ng", "scopes": ["repo"]}',
headers={'User-Agent': 'git-devbliss/ng',
'X-GitHub-OTP': 'two_factor_code',
'Content-Type': 'application/json'},
auth=('test_username', 'test_pass'))
])
input_function.assert_has_calls([
call('GitHub username: '),
call('Please input your two_factor code: ')
])
self.assertEqual(print_function.call_count, 0)
@unittest.mock.patch("requests.request")
@unittest.mock.patch("os.path.exists")
def test_request_400(self, exists, request):
exists.return_value = True
request.return_value = unittest.mock.Mock()
request.return_value.status_code = 400
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
with self.assertRaises(requests.exceptions.RequestException):
gh._request('test_method', 'test_path',
'test_body', 'test_host')
exists.assert_called_with(gh.token_file)
request.assert_has_calls([
call('test_method', 'test_hosttest_path',
headers={'User-Agent': 'git-devbliss/ng',
'Content-Type': 'application/json',
'Authorization': 'bearer test_token'},
data='test_body'),
call().json()
])
@unittest.mock.patch("git_devbliss.github.GitHub._interactive_login")
@unittest.mock.patch("requests.request")
@unittest.mock.patch("os.path.exists")
def test_request_401(self, exists, request, login):
exists.return_value = True
mock_401 = unittest.mock.Mock()
mock_401.status_code = 401
mock_200 = unittest.mock.Mock()
mock_200.status_code = 200
request.side_effect = [mock_401, mock_200]
login.return_value = 'test_token'
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
gh._request('test_method', 'test_path',
'test_body', 'test_host')
request.assert_has_calls([
call('test_method', 'test_hosttest_path',
headers={'Content-Type': 'application/json',
'User-Agent': 'git-devbliss/ng',
'Authorization': 'bearer test_token'},
data='test_body'),
call('test_method', 'test_hosttest_path',
headers={'Content-Type': 'application/json',
'User-Agent': 'git-devbliss/ng',
'Authorization': 'bearer test_token'},
data='test_body'),
])
exists.assert_called_with(gh.token_file)
@unittest.mock.patch("requests.request")
@unittest.mock.patch("os.path.exists")
def test_request_301(self, exists, request):
exists.return_value = True
mock_301 = unittest.mock.Mock()
mock_301.status_code = 301
mock_301.headers = {'location': 'test_location'}
mock_200 = unittest.mock.Mock()
mock_200.status_code = 200
request.side_effect = [mock_301, mock_200]
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
gh._request('test_method', 'test_path',
'test_body', 'test_host')
request.assert_has_calls([
call('test_method', 'test_hosttest_path', headers={
'Content-Type': 'application/json',
'Authorization': 'bearer test_token',
'User-Agent': 'git-devbliss/ng'},
data='test_body'),
call('test_method', 'test_hosttest_location', headers={
'Content-Type': 'application/json',
'Authorization': 'bearer test_token',
'User-Agent': 'git-devbliss/ng'},
data='test_body'),
])
exists.assert_called_with(gh.token_file)
@unittest.mock.patch("git_devbliss.github.GitHub._request")
@unittest.mock.patch("os.path.exists")
def test_pulls(self, exists, request):
exists.return_value = True
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
gh.pulls('test_user', 'test_repo')
request.assert_called_once_with(
'GET', '/repos/test_user/test_repo/pulls')
@unittest.mock.patch("git_devbliss.github.GitHub._request")
@unittest.mock.patch("os.path.exists")
def test_issues(self, exists, request):
exists.return_value = True
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
gh.issues('test_user', 'test_repo')
request.assert_called_once_with(
'GET', '/repos/test_user/test_repo/issues')
@unittest.mock.patch("git_devbliss.github.GitHub._request")
@unittest.mock.patch("os.path.exists")
def test_issue(self, exists, request):
exists.return_value = True
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
gh.issue('test_user', 'test_repo', 'test_title', 'test_body')
request.assert_called_once_with(
'POST', '/repos/test_user/test_repo/issues',
'{"body": "test_body", "title": "test_title"}')
@unittest.mock.patch("git_devbliss.github.GitHub._request")
@unittest.mock.patch("os.path.exists")
def test_branches(self, exists, request):
exists.return_value = True
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
gh.branches('test_user', 'test_repo')
request.assert_called_once_with(
'GET', '/repos/test_user/test_repo/branches')
@unittest.mock.patch("git_devbliss.github.GitHub._request")
@unittest.mock.patch("os.path.exists")
def test_tags(self, exists, request):
exists.return_value = True
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
gh.tags('test_user', 'test_repo')
request.assert_called_once_with(
'GET', '/repos/test_user/test_repo/tags')
@unittest.mock.patch("git_devbliss.github.GitHub._request")
@unittest.mock.patch("os.path.exists")
def test_orgs(self, exists, request):
exists.return_value = True
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
gh.orgs('test_org')
request.assert_called_once_with(
'GET', '/orgs/test_org')
@unittest.mock.patch("git_devbliss.github.GitHub._request")
@unittest.mock.patch("os.path.exists")
def test_events(self, exists, request):
exists.return_value = True
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
gh.events('test_org')
request.assert_called_once_with(
'GET', '/orgs/test_org/events')
@unittest.mock.patch("git_devbliss.github.GitHub._request")
@unittest.mock.patch("os.path.exists")
def test_repos(self, exists, request):
exists.return_value = True
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
gh.repos('test_org')
request.assert_called_once_with(
'GET', '/orgs/test_org/repos?per_page=500')
@unittest.mock.patch("git_devbliss.github.GitHub._request")
@unittest.mock.patch("os.path.exists")
def test_pull_request(self, exists, request):
exists.return_value = True
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
gh.pull_request('test_user', 'test_repo', 'test_head',
'test_base', 'test_title', 'test_body')
request.assert_called_once_with(
'POST', '/repos/test_user/test_repo/pulls',
'{"base": "test_base", "body": "test_body", '
'"head": "test_head", "title": "test_title"}')
@unittest.mock.patch("git_devbliss.github.GitHub._request")
@unittest.mock.patch("os.path.exists")
def test_get_pull_request(self, exists, request):
exists.return_value = True
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
gh.get_pull_request('test_user', 'test_repo', 333)
request.assert_called_once_with(
'GET', '/repos/test_user/test_repo/pulls/333')
@unittest.mock.patch("git_devbliss.github.GitHub._request")
@unittest.mock.patch("os.path.exists")
def test_merge_button(self, exists, request):
exists.return_value = True
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
gh.merge_button('test_user', 'test_repo', 333)
request.assert_called_once_with(
'PUT', '/repos/test_user/test_repo/pulls/333/merge', '{}')
@unittest.mock.patch("git_devbliss.github.GitHub._request")
@unittest.mock.patch("os.path.exists")
def test_update_pull_request(self, exists, request):
exists.return_value = True
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
gh.update_pull_request('test_user', 'test_repo', 333, 'test_body')
request.assert_called_once_with(
'PATCH', '/repos/test_user/test_repo/pulls/333', '"test_body"')
@unittest.mock.patch("subprocess.check_output")
@unittest.mock.patch("os.path.exists")
def test_get_current_repo(self, exists, check_output):
exists.return_value = True
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
check_output.return_value = b'git@github.com:test_user/test_repo'
gh.get_current_repo()
check_output.assert_called_once_with(
'git remote -v', shell=True)
@unittest.mock.patch("subprocess.check_output")
@unittest.mock.patch("os.path.exists")
def test_get_current_branch(self, exists, check_output):
exists.return_value = True
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
gh.get_current_branch()
check_output.assert_called_once_with(
'git rev-parse --abbrev-ref HEAD', shell=True)
@unittest.mock.patch("subprocess.check_output")
@unittest.mock.patch("os.path.exists")
def test_get_current_repo_error(self, exists, check_output):
exists.return_value = True
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(
read_data='test_token')):
gh = git_devbliss.github.GitHub()
self.assertEqual(gh.token, 'test_token')
check_output.return_value = b''
with self.assertRaises(ValueError):
gh.get_current_repo()
check_output.assert_called_once_with(
'git remote -v', shell=True)
# --- analysistools/__init__.py (repo: DSPsleeporg/an_spindle, license: BSD-3-Clause) ---
import analysistools.tools
import analysistools.norm_fre_mp
import analysistools.param_hist
import analysistools.param_rep
import analysistools.current
import analysistools.spike_freq_adap
import analysistools.bifurcation

# --- test/feature/test_nms.py (repo: sampepose/kornianodeps, licenses: ECL-2.0, Apache-2.0) ---
import pytest
import kornia as kornia
import kornia.testing as utils # test utils
from test.common import device
import torch
from torch.testing import assert_allclose
from torch.autograd import gradcheck
class TestNMS2d:
def test_shape(self, device):
inp = torch.ones(1, 3, 4, 4, device=device)
nms = kornia.feature.NonMaximaSuppression2d((3, 3)).to(device)
assert nms(inp).shape == inp.shape
def test_shape_batch(self, device):
inp = torch.ones(4, 3, 4, 4, device=device)
nms = kornia.feature.NonMaximaSuppression2d((3, 3)).to(device)
assert nms(inp).shape == inp.shape
def test_nms(self, device):
inp = torch.tensor([[[
[0., 0., 0., 0., 0., 0., 0.],
[0., 0.1, 1., 0., 1., 1., 0.],
[0., 0.7, 1.1, 0., 1., 1., 0.],
[0., 0.8, 1., 0., 1., 1., 0.],
]]], device=device).float()
expected = torch.tensor([[[
[0., 0., 0., 0., 0., 0., 0.],
[0., 0, 0, 0., 1, 1., 0.],
[0., 0, 1.1, 0., 1., 1., 0.],
[0., 0, 0, 0., 1., 1., 0.],
]]], device=device).float()
nms = kornia.feature.NonMaximaSuppression2d((3, 3)).to(device)
scores = nms(inp)
assert_allclose(scores, expected, atol=1e-4, rtol=1e-3)
def test_gradcheck(self, device):
k = 0.04
batch_size, channels, height, width = 1, 2, 5, 4
img = torch.rand(batch_size, channels, height, width, device=device)
img = utils.tensor_to_gradcheck_var(img) # to var
assert gradcheck(kornia.feature.nms2d, (img, (3, 3)),
raise_exception=True, nondet_tol=1e-4)
class TestNMS3d:
def test_shape(self, device):
inp = torch.ones(1, 1, 3, 4, 4, device=device)
nms = kornia.feature.NonMaximaSuppression3d((3, 3, 3)).to(device)
assert nms(inp).shape == inp.shape
def test_shape_batch(self, device):
inp = torch.ones(4, 1, 3, 4, 4, device=device)
nms = kornia.feature.NonMaximaSuppression3d((3, 3, 3)).to(device)
assert nms(inp).shape == inp.shape
def test_nms(self, device):
inp = torch.tensor([[[
[[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0.],
[0., 1., 2., 1., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]],
]]]).to(device)
expected = torch.tensor([[[[[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 2., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
                                    [0., 0., 0., 0., 0.]]]]]).to(device)
nms = kornia.feature.NonMaximaSuppression3d((3, 3, 3)).to(device)
scores = nms(inp)
assert_allclose(scores, expected, atol=1e-4, rtol=1e-3)
def test_gradcheck(self, device):
batch_size, channels, depth, height, width = 1, 1, 4, 5, 4
img = torch.rand(batch_size, channels, depth, height, width, device=device)
img = utils.tensor_to_gradcheck_var(img) # to var
assert gradcheck(kornia.feature.nms3d, (img, (3, 3, 3)),
raise_exception=True, nondet_tol=1e-4)
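# The tests above exercise kornia's max-pool based non-maxima suppression. The
# core idea -- keep a score only where it equals the maximum of its local
# window -- can be sketched without torch (an illustrative 1-D toy version,
# not kornia's actual implementation):

```python
def nms_1d(scores, window=3):
    """Zero out every score that is not the maximum of its local window."""
    half = window // 2
    out = []
    for i, s in enumerate(scores):
        # clamp the window to the ends of the sequence
        lo, hi = max(0, i - half), min(len(scores), i + half + 1)
        out.append(s if s == max(scores[lo:hi]) else 0.0)
    return out

print(nms_1d([0.0, 0.1, 1.0, 0.7, 1.1, 0.8]))  # [0.0, 0.0, 1.0, 0.0, 1.1, 0.0]
```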
# --- eval_mosmed_timm-regnetx_002_RandomCrop.py (repo: BrunoKrinski/segtool, license: MIT) ---
import os
ls=["python main.py --configs configs/eval_mosmed_unetplusplus_timm-regnetx_002_0_RandomCrop.yml",
"python main.py --configs configs/eval_mosmed_unetplusplus_timm-regnetx_002_1_RandomCrop.yml",
"python main.py --configs configs/eval_mosmed_unetplusplus_timm-regnetx_002_2_RandomCrop.yml",
"python main.py --configs configs/eval_mosmed_unetplusplus_timm-regnetx_002_3_RandomCrop.yml",
"python main.py --configs configs/eval_mosmed_unetplusplus_timm-regnetx_002_4_RandomCrop.yml",
]
for l in ls:
    os.system(l)

# --- test/pyaz/network/local_gateway/__init__.py (repo: bigdatamoore/py-az-cli, license: MIT) ---
import json, subprocess
from ... pyaz_utils import get_cli_name, get_params
def delete(resource_group, name, no_wait=None):
params = get_params(locals())
command = "az network local-gateway delete " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        print(stdout)
        return json.loads(stdout)
    else:
        print(stderr)
        raise Exception(stderr)
def show(resource_group, name):
params = get_params(locals())
command = "az network local-gateway show " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        print(stdout)
        return json.loads(stdout)
    else:
        print(stderr)
        raise Exception(stderr)
def list(resource_group):
params = get_params(locals())
command = "az network local-gateway list " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        print(stdout)
        return json.loads(stdout)
    else:
        print(stderr)
        raise Exception(stderr)
def create(resource_group, name, gateway_ip_address, location=None, tags=None, local_address_prefixes=None, asn=None, bgp_peering_address=None, peer_weight=None, no_wait=None):
params = get_params(locals())
command = "az network local-gateway create " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        print(stdout)
        return json.loads(stdout)
    else:
        print(stderr)
        raise Exception(stderr)
def update(resource_group, name, gateway_ip_address=None, local_address_prefixes=None, asn=None, bgp_peering_address=None, peer_weight=None, tags=None, set=None, add=None, remove=None, force_string=None, no_wait=None):
params = get_params(locals())
command = "az network local-gateway update " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        print(stdout)
        return json.loads(stdout)
    else:
        print(stderr)
        raise Exception(stderr)
def wait(resource_group, name, timeout=None, interval=None, deleted=None, created=None, updated=None, exists=None, custom=None):
params = get_params(locals())
command = "az network local-gateway wait " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        print(stdout)
        return json.loads(stdout)
    else:
        print(stderr)
        raise Exception(stderr)
# --- model/dlink_jittor.py (repo: THU-CVlab/JMedSeg, license: MIT) ---
import jittor as jt
from jittor import init
from jittor import nn
from jittor import models
from functools import partial
nonlinearity = partial(nn.relu)
class Dblock_more_dilate(nn.Module):
def __init__(self, channel):
super(Dblock_more_dilate, self).__init__()
self.dilate1 = nn.Conv(channel, channel, 3, dilation=1, padding=1)
self.dilate2 = nn.Conv(channel, channel, 3, dilation=2, padding=2)
self.dilate3 = nn.Conv(channel, channel, 3, dilation=4, padding=4)
self.dilate4 = nn.Conv(channel, channel, 3, dilation=8, padding=8)
self.dilate5 = nn.Conv(channel, channel, 3, dilation=16, padding=16)
for m in self.modules():
if (isinstance(m, nn.Conv) or isinstance(m, nn.ConvTranspose)):
if (m.bias is not None):
m.bias.data = jt.zeros(len(m.bias.data))
def execute(self, x):
dilate1_out = nonlinearity(self.dilate1(x))
dilate2_out = nonlinearity(self.dilate2(dilate1_out))
dilate3_out = nonlinearity(self.dilate3(dilate2_out))
dilate4_out = nonlinearity(self.dilate4(dilate3_out))
dilate5_out = nonlinearity(self.dilate5(dilate4_out))
out = (((((x + dilate1_out) + dilate2_out) + dilate3_out) + dilate4_out) + dilate5_out)
return out
class Dblock(nn.Module):
def __init__(self, channel):
super(Dblock, self).__init__()
self.dilate1 = nn.Conv(channel, channel, 3, dilation=1, padding=1)
self.dilate2 = nn.Conv(channel, channel, 3, dilation=2, padding=2)
self.dilate3 = nn.Conv(channel, channel, 3, dilation=4, padding=4)
self.dilate4 = nn.Conv(channel, channel, 3, dilation=8, padding=8)
for m in self.modules():
if (isinstance(m, nn.Conv) or isinstance(m, nn.ConvTranspose)):
if (m.bias is not None):
m.bias.data = jt.zeros(len(m.bias.data))
def execute(self, x):
dilate1_out = nonlinearity(self.dilate1(x))
dilate2_out = nonlinearity(self.dilate2(dilate1_out))
dilate3_out = nonlinearity(self.dilate3(dilate2_out))
dilate4_out = nonlinearity(self.dilate4(dilate3_out))
out = ((((x + dilate1_out) + dilate2_out) + dilate3_out) + dilate4_out)
return out
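# Why the dilation rates double: stacking 3x3 convolutions with dilations
# 1, 2, 4, 8 grows the receptive field of the cascaded path geometrically.
# A quick side computation (an illustration, not part of the model code):

```python
def receptive_field(dilations, kernel=3):
    # Each k x k conv with dilation d adds (k - 1) * d pixels on top of the
    # single starting pixel, so the deepest cascaded path covers:
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

print(receptive_field([1, 2, 4, 8]))       # 31 -> Dblock's deepest branch
print(receptive_field([1, 2, 4, 8, 16]))   # 63 -> Dblock_more_dilate's deepest branch
```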
class DecoderBlock(nn.Module):
def __init__(self, in_channels, n_filters):
super(DecoderBlock, self).__init__()
self.conv1 = nn.Conv(in_channels, (in_channels // 4), 1)
self.norm1 = nn.BatchNorm((in_channels // 4))
self.relu1 = nonlinearity
self.deconv2 = nn.ConvTranspose((in_channels // 4), (in_channels // 4), 3, stride=2, padding=1, output_padding=1)
self.norm2 = nn.BatchNorm((in_channels // 4))
self.relu2 = nonlinearity
self.conv3 = nn.Conv((in_channels // 4), n_filters, 1)
self.norm3 = nn.BatchNorm(n_filters)
self.relu3 = nonlinearity
def execute(self, x):
x = self.conv1(x)
x = self.norm1(x)
x = self.relu1(x)
x = self.deconv2(x)
x = self.norm2(x)
x = self.relu2(x)
x = self.conv3(x)
x = self.norm3(x)
x = self.relu3(x)
return x
class DinkNet34_less_pool(nn.Module):
def __init__(self, num_classes=1):
        super(DinkNet34_less_pool, self).__init__()
filters = [64, 128, 256, 512]
resnet = models.resnet34(pretrained=True)
self.firstconv = resnet.conv1
self.firstbn = resnet.bn1
self.firstrelu = resnet.relu
self.firstmaxpool = resnet.maxpool
self.encoder1 = resnet.layer1
self.encoder2 = resnet.layer2
self.encoder3 = resnet.layer3
self.dblock = Dblock_more_dilate(256)
self.decoder3 = DecoderBlock(filters[2], filters[1])
self.decoder2 = DecoderBlock(filters[1], filters[0])
self.decoder1 = DecoderBlock(filters[0], filters[0])
self.finaldeconv1 = nn.ConvTranspose(filters[0], 32, 4, stride=2, padding=1)
self.finalrelu1 = nonlinearity
self.finalconv2 = nn.Conv(32, 32, 3, padding=1)
self.finalrelu2 = nonlinearity
self.finalconv3 = nn.Conv(32, num_classes, 3, padding=1)
def execute(self, x):
x = self.firstconv(x)
x = self.firstbn(x)
x = self.firstrelu(x)
x = self.firstmaxpool(x)
e1 = self.encoder1(x)
e2 = self.encoder2(e1)
e3 = self.encoder3(e2)
e3 = self.dblock(e3)
d3 = (self.decoder3(e3) + e2)
d2 = (self.decoder2(d3) + e1)
d1 = self.decoder1(d2)
out = self.finaldeconv1(d1)
out = self.finalrelu1(out)
out = self.finalconv2(out)
out = self.finalrelu2(out)
out = self.finalconv3(out)
        return jt.sigmoid(out)
class DinkNet34(nn.Module):
def __init__(self, num_classes=1, num_channels=3):
super(DinkNet34, self).__init__()
filters = [64, 128, 256, 512]
resnet = models.resnet34(pretrained=True)
self.firstconv = resnet.conv1
self.firstbn = resnet.bn1
self.firstrelu = resnet.relu
self.firstmaxpool = resnet.maxpool
self.encoder1 = resnet.layer1
self.encoder2 = resnet.layer2
self.encoder3 = resnet.layer3
self.encoder4 = resnet.layer4
self.dblock = Dblock(512)
self.decoder4 = DecoderBlock(filters[3], filters[2])
self.decoder3 = DecoderBlock(filters[2], filters[1])
self.decoder2 = DecoderBlock(filters[1], filters[0])
self.decoder1 = DecoderBlock(filters[0], filters[0])
self.finaldeconv1 = nn.ConvTranspose(filters[0], 32, 4, stride=2, padding=1)
self.finalrelu1 = nonlinearity
self.finalconv2 = nn.Conv(32, 32, 3, padding=1)
self.finalrelu2 = nonlinearity
self.finalconv3 = nn.Conv(32, num_classes, 3, padding=1)
def execute(self, x):
x = self.firstconv(x)
x = self.firstbn(x)
x = self.firstrelu(x)
x = self.firstmaxpool(x)
e1 = self.encoder1(x)
e2 = self.encoder2(e1)
e3 = self.encoder3(e2)
e4 = self.encoder4(e3)
e4 = self.dblock(e4)
d4 = (self.decoder4(e4) + e3)
d3 = (self.decoder3(d4) + e2)
d2 = (self.decoder2(d3) + e1)
d1 = self.decoder1(d2)
out = self.finaldeconv1(d1)
out = self.finalrelu1(out)
out = self.finalconv2(out)
out = self.finalrelu2(out)
out = self.finalconv3(out)
        return jt.sigmoid(out)
class LinkNet34(nn.Module):
def __init__(self, num_classes=1):
super(LinkNet34, self).__init__()
filters = [64, 128, 256, 512]
resnet = models.resnet34(pretrained=True)
self.firstconv = resnet.conv1
self.firstbn = resnet.bn1
self.firstrelu = resnet.relu
self.firstmaxpool = resnet.maxpool
self.encoder1 = resnet.layer1
self.encoder2 = resnet.layer2
self.encoder3 = resnet.layer3
self.encoder4 = resnet.layer4
self.decoder4 = DecoderBlock(filters[3], filters[2])
self.decoder3 = DecoderBlock(filters[2], filters[1])
self.decoder2 = DecoderBlock(filters[1], filters[0])
self.decoder1 = DecoderBlock(filters[0], filters[0])
self.finaldeconv1 = nn.ConvTranspose(filters[0], 32, 3, stride=2)
self.finalrelu1 = nonlinearity
self.finalconv2 = nn.Conv(32, 32, 3)
self.finalrelu2 = nonlinearity
self.finalconv3 = nn.Conv(32, num_classes, 2, padding=1)
def execute(self, x):
x = self.firstconv(x)
x = self.firstbn(x)
x = self.firstrelu(x)
x = self.firstmaxpool(x)
e1 = self.encoder1(x)
e2 = self.encoder2(e1)
e3 = self.encoder3(e2)
e4 = self.encoder4(e3)
d4 = (self.decoder4(e4) + e3)
d3 = (self.decoder3(d4) + e2)
d2 = (self.decoder2(d3) + e1)
d1 = self.decoder1(d2)
out = self.finaldeconv1(d1)
out = self.finalrelu1(out)
out = self.finalconv2(out)
out = self.finalrelu2(out)
out = self.finalconv3(out)
return jt.sigmoid(out)
class DinkNet50(nn.Module):
def __init__(self, num_classes=1):
super(DinkNet50, self).__init__()
filters = [256, 512, 1024, 2048]
resnet = models.resnet50(pretrained=True)
self.firstconv = resnet.conv1
self.firstbn = resnet.bn1
self.firstrelu = resnet.relu
self.firstmaxpool = resnet.maxpool
self.encoder1 = resnet.layer1
self.encoder2 = resnet.layer2
self.encoder3 = resnet.layer3
self.encoder4 = resnet.layer4
self.dblock = Dblock_more_dilate(2048)
self.decoder4 = DecoderBlock(filters[3], filters[2])
self.decoder3 = DecoderBlock(filters[2], filters[1])
self.decoder2 = DecoderBlock(filters[1], filters[0])
self.decoder1 = DecoderBlock(filters[0], filters[0])
self.finaldeconv1 = nn.ConvTranspose(filters[0], 32, 4, stride=2, padding=1)
self.finalrelu1 = nonlinearity
self.finalconv2 = nn.Conv(32, 32, 3, padding=1)
self.finalrelu2 = nonlinearity
self.finalconv3 = nn.Conv(32, num_classes, 3, padding=1)
def execute(self, x):
x = self.firstconv(x)
x = self.firstbn(x)
x = self.firstrelu(x)
x = self.firstmaxpool(x)
e1 = self.encoder1(x)
e2 = self.encoder2(e1)
e3 = self.encoder3(e2)
e4 = self.encoder4(e3)
e4 = self.dblock(e4)
d4 = (self.decoder4(e4) + e3)
d3 = (self.decoder3(d4) + e2)
d2 = (self.decoder2(d3) + e1)
d1 = self.decoder1(d2)
out = self.finaldeconv1(d1)
out = self.finalrelu1(out)
out = self.finalconv2(out)
out = self.finalrelu2(out)
out = self.finalconv3(out)
return jt.sigmoid(out)
# --- tests/test_cli.py (repo: rs-kellogg/edgar2data, license: MIT) ---
"""
Copyright (c) 2021 Northwestern University. All rights reserved.
This work is licensed under the terms of the MIT license.
For a copy, see <https://opensource.org/licenses/MIT>.
"""
import tempfile
from typer.testing import CliRunner
from edgar.cli import *
runner = CliRunner()
dir_path = os.path.dirname(os.path.realpath(__file__))
def count_lines(text):
matches = re.compile(r"^\".+?\.txt\"", re.MULTILINE).findall(text)
return len(matches)
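# count_lines tallies rows whose line starts with a quoted *.txt filename,
# which is how the generated CSVs key their records. A quick illustration of
# the regex on hypothetical sample data:

```python
import re

sample = '"a.txt",field1,field2\n"b.txt",field1,field2\nheader,x,y\n'
matches = re.compile(r"^\".+?\.txt\"", re.MULTILINE).findall(sample)
print(len(matches))  # 2 -- the unquoted header row is not counted
```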
def test_script_on_form3_collection(test_form3_collection, tmpdir):
"""
Test script on a random sample of Form 3 documents
:param test_form3_collection:
:return:
"""
result = runner.invoke(
app,
["process", str(test_form3_collection), "--out_dir", str(tmpdir)],
)
assert result.exit_code == 0
assert "processing files in dir" in result.stdout
assert "generating output in dir" in result.stdout
assert result.stdout.count("processing file:") == 100
assert (Path(tmpdir) / "document_info.csv").exists()
assert count_lines((Path(tmpdir) / "document_info.csv").read_text()) == 100
assert (Path(tmpdir) / "footnotes.csv").exists()
assert count_lines((Path(tmpdir) / "footnotes.csv").read_text()) == 500
assert (Path(tmpdir) / "derivatives.csv").exists()
assert count_lines((Path(tmpdir) / "derivatives.csv").read_text()) == 128
assert (Path(tmpdir) / "nonderivatives.csv").exists()
assert count_lines((Path(tmpdir) / "nonderivatives.csv").read_text()) == 54
assert (Path(tmpdir) / "report_owners.csv").exists()
assert count_lines((Path(tmpdir) / "report_owners.csv").read_text()) == 178
assert (Path(tmpdir) / "signatures.csv").exists()
assert count_lines((Path(tmpdir) / "signatures.csv").read_text()) == 168
def test_script_on_form4_collection(test_form4_collection, tmpdir):
"""
Test script on a random sample of Form 4 documents
:param test_form4_collection:
:return:
"""
result = runner.invoke(
app,
["process", str(test_form4_collection), "--out_dir", str(tmpdir)],
)
assert result.exit_code == 0
assert "processing files in dir" in result.stdout
assert "generating output in dir" in result.stdout
assert result.stdout.count("processing file:") == 100
assert (Path(tmpdir) / "document_info.csv").exists()
assert count_lines((Path(tmpdir) / "document_info.csv").read_text()) == 100
assert (Path(tmpdir) / "footnotes.csv").exists()
assert count_lines((Path(tmpdir) / "footnotes.csv").read_text()) == 417
assert (Path(tmpdir) / "derivatives.csv").exists()
assert count_lines((Path(tmpdir) / "derivatives.csv").read_text()) == 81
assert (Path(tmpdir) / "nonderivatives.csv").exists()
assert count_lines((Path(tmpdir) / "nonderivatives.csv").read_text()) == 182
assert (Path(tmpdir) / "report_owners.csv").exists()
assert count_lines((Path(tmpdir) / "report_owners.csv").read_text()) == 129
assert (Path(tmpdir) / "signatures.csv").exists()
assert count_lines((Path(tmpdir) / "signatures.csv").read_text()) == 123
def test_script_on_form5_collection(test_form5_collection, tmpdir):
"""
Test script on a random sample of Form 5 documents
:param test_form5_collection:
:return:
"""
result = runner.invoke(
app,
["process", str(test_form5_collection), "--out_dir", str(tmpdir)],
)
assert result.exit_code == 0
assert "processing files in dir" in result.stdout
assert "generating output in dir" in result.stdout
assert result.stdout.count("processing file:") == 100
assert (Path(tmpdir) / "document_info.csv").exists()
assert count_lines((Path(tmpdir) / "document_info.csv").read_text()) == 100
assert (Path(tmpdir) / "footnotes.csv").exists()
assert count_lines((Path(tmpdir) / "footnotes.csv").read_text()) == 546
assert (Path(tmpdir) / "derivatives.csv").exists()
assert count_lines((Path(tmpdir) / "derivatives.csv").read_text()) == 138
assert (Path(tmpdir) / "nonderivatives.csv").exists()
assert count_lines((Path(tmpdir) / "nonderivatives.csv").read_text()) == 286
assert (Path(tmpdir) / "report_owners.csv").exists()
assert count_lines((Path(tmpdir) / "report_owners.csv").read_text()) == 102
assert (Path(tmpdir) / "signatures.csv").exists()
assert count_lines((Path(tmpdir) / "signatures.csv").read_text()) == 102
# --- CLT_Visualizer.py (repo: jpozin/Math-Projects, license: MIT) ---
# This module is intended to illustrate the central limit theorem (sum and average versions) as it applies to several common distributions
# It is intended to be instructional
import matplotlib.pyplot as plt
from statistics import mean, pstdev
from RandomNumbersWithDistributions import *
from sys import argv
def HistSum(dist, n, tot):
"""Generate tot random variables Z_j = sum(X_1, X_2, ..., X_n)
Where X_i is an iid random variable with distribution dist for all i
Z_j will be approximately normal(mu=n*mean(X_1), sqrt(n)*stdev(X_1)) for all 1 <= j <= tot
The histogram plot of all Z_j's will be displayed to help the student visualize the Central Limit Theorem (sum case)"""
dist = dist.lower()
Z_vals = []
if dist in ('unif', 'uniform'):
a = float(input("Enter a value for a (lower bound): "))
b = float(input("Enter a value for b (upper bound): "))
for i in range(tot):
Z_vals.append(sum([UnifNum(a, b) for _ in range(n)]))
if dist in ('expo', 'exponential'):
rate = float(input("Enter a value for λ (rate): "))
for i in range(tot):
Z_vals.append(sum([ExpoNum(rate) for _ in range(n)]))
if dist in ('bin', 'bino', 'binomial'):
n_ = int(input("Enter a value for n (number of trials per binomial RV): "))
p = float(input("Enter a value for p (success probability): "))
for i in range(tot):
Z_vals.append(sum([BinomialNum(n_, p) for _ in range(n)]))
if dist in ('geo', 'geom', 'geometric'):
p = float(input("Enter a value for p (success probability): "))
for i in range(tot):
Z_vals.append(sum([GeometricNum(p) for _ in range(n)]))
if dist in ('discrete uniform', 'discunif', 'disc unif', 'dunif'):
a = int(input("Enter a value for a (lower bound): "))
b = int(input("Enter a value for b (upper bound): "))
for i in range(tot):
Z_vals.append(sum([DiscUnifNum2(a, b) for _ in range(n)]))
if dist in ('pareto',):
alpha = float(input("Enter a value for α (scale parameter): "))
for i in range(tot):
Z_vals.append(sum([ParetoNum(alpha) for _ in range(n)]))
if dist in ('weibull', 'wb', 'wbl'):
alpha = float(input("Enter a value for α (shape parameter): "))
lambd = float(input("Enter a value for λ (scale parameter): "))
for i in range(tot):
Z_vals.append(sum([WeibullNum(alpha, lambd) for _ in range(n)]))
    if not Z_vals:
        print(f"Unsupported distribution: {dist}")
        return
to_print = (f"The mean of this collection of random variables is: {mean(Z_vals)}"
"\n"
f"The standard deviation of this collection of random variables is: {pstdev(Z_vals)}"
"\n")
print(to_print)
plt.hist(Z_vals)
plt.show()
return
def HistAvg(dist, n, tot):
"""Generate tot random variables Z_j = (1/n)*sum(X_1, X_2, ..., X_n)
Where X_i is an iid random variable with distribution dist for all i
Z_j will be approximately normal(mu=mean(X_1), stdev(X_1)/sqrt(n)) for all 1 <= j <= tot
The histogram plot of all Z_j's will be displayed to help the student visualize the Central Limit Theorem (average case)"""
dist = dist.lower()
Z_vals = []
if dist in ('unif', 'uniform'):
a = float(input("Enter a value for a (lower bound): "))
b = float(input("Enter a value for b (upper bound): "))
for i in range(tot):
Z_vals.append(sum([UnifNum(a, b) for _ in range(n)]))
if dist in ('expo', 'exponential'):
rate = float(input("Enter a value for λ (rate): "))
for i in range(tot):
Z_vals.append(sum([ExpoNum(rate) for _ in range(n)]))
if dist in ('bin', 'bino', 'binomial'):
n_ = int(input("Enter a value for n (number of trials per binomial RV): "))
p = float(input("Enter a value for p (success probability): "))
for i in range(tot):
Z_vals.append(sum([BinomialNum(n_, p) for _ in range(n)]))
if dist in ('geo', 'geom', 'geometric'):
p = float(input("Enter a value for p (success probability): "))
for i in range(tot):
Z_vals.append(sum([GeometricNum(p) for _ in range(n)]))
if dist in ('discrete uniform', 'discunif', 'disc unif', 'dunif'):
a = int(input("Enter a value for a (lower bound): "))
b = int(input("Enter a value for b (upper bound): "))
for i in range(tot):
Z_vals.append(sum([DiscUnifNum2(a, b) for _ in range(n)]))
if dist in ('pareto',):
alpha = float(input("Enter a value for α (scale parameter): "))
for i in range(tot):
Z_vals.append(sum([ParetoNum(alpha) for _ in range(n)]))
if dist in ('weibull', 'wb', 'wbl'):
alpha = float(input("Enter a value for α (shape parameter): "))
lambd = float(input("Enter a value for λ (scale parameter): "))
for i in range(tot):
Z_vals.append(sum([WeibullNum(alpha, lambd) for _ in range(n)]))
    if not Z_vals:
        print(f"Unsupported distribution: {dist}")
        return
    Z_vals = [z / n for z in Z_vals]
to_print = (f"The mean of this collection of random variables is: {mean(Z_vals)}"
"\n"
f"The standard deviation of this collection of random variables is: {pstdev(Z_vals)}"
"\n")
print(to_print)
plt.hist(Z_vals)
plt.show()
return
if __name__ == '__main__':
if len(argv) == 1:
print("\nAdditional command line argument needed to determine whether to use sum or average.")
exit(1)
argv[1] = argv[1].lower()
if argv[1] in ('sum', 's'):
use_sum = True
elif argv[1] in ('avg', 'a', 'average'):
use_sum = False
else:
print("\nInvalid command line argument for using sum or average.")
exit(1)
dist = input("Enter a distribution to use: ")
n = int(input("Enter a value for n (Z = X1 + X2 + .. + Xn): "))
tot = int(input("Enter a value for the total number of random variables to generate for the histogram: "))
if use_sum:
HistSum(dist, n, tot)
else:
        HistAvg(dist, n, tot)
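HistSum and HistAvg differ only in the final division by n; the effect they visualize can be sketched with the standard library alone. The `clt_sums` function below is illustrative and uses `random.expovariate` rather than the RandomNumbersWithDistributions module:

```python
import random
from statistics import mean, pstdev

def clt_sums(n, tot, rate=1.0, seed=0):
    """Draw tot sums of n iid Exponential(rate) variables.

    By the CLT, each sum is approximately normal with mean n/rate
    and standard deviation sqrt(n)/rate.
    """
    rng = random.Random(seed)
    return [sum(rng.expovariate(rate) for _ in range(n)) for _ in range(tot)]

sums = clt_sums(n=50, tot=2000)
print(mean(sums), pstdev(sums))  # should land near 50 and sqrt(50) ≈ 7.07
```

Dividing each sum by n recovers the average case, with mean 1/rate and standard deviation 1/(rate*sqrt(n)).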
# pyplan_core/cubepy/__init__.py | pyplan/pyplan-core | MIT
from pyplan_core.cubepy.version import __version__
from pyplan_core.cubepy.axis import Axis
from pyplan_core.cubepy.cube import Cube, concatenate, stack, apply_op
from pyplan_core.cubepy.index import Index
from pyplan_core.cubepy.exceptions import *
# test/PR_test/unit_test/backend/test_tensor_pow.py | hanskrupakar/fastestimator | Apache-2.0
import unittest
import numpy as np
import tensorflow as tf
import torch
import fastestimator as fe
class TestTensorPow(unittest.TestCase):
def test_np_input_pow_gt_1(self):
n = np.array([[1, 4, 6], [2.3, 0.5, 0]])
target = np.array([[1, 8.44485063e+01, 3.09089322e+02], [1.43723927e+01, 1.08818820e-01, 0]])
b = fe.backend.tensor_pow(n, 3.2)
self.assertTrue(np.allclose(b, target))
def test_np_input_pow_lt_1(self):
n = np.array([[1, 4, 6], [2.3, 0.5, 0]])
target = np.array([[1, 1.33792755, 1.45683968], [1.1911401, 0.86453723, 0]])
b = fe.backend.tensor_pow(n, 0.21)
self.assertTrue(np.allclose(b, target))
def test_tf_input_pow_gt_1(self):
n = tf.convert_to_tensor([[1, 4, 6], [2.3, 0.5, 0]])
target = tf.convert_to_tensor([[1, 8.44485063e+01, 3.09089322e+02], [1.43723927e+01, 1.08818820e-01, 0]])
b = fe.backend.tensor_pow(n, 3.2)
self.assertTrue(np.allclose(b, target))
def test_tf_input_pow_lt_1(self):
n = tf.convert_to_tensor([[1, 4, 6], [2.3, 0.5, 0]])
target = tf.convert_to_tensor([[1, 1.33792755, 1.45683968], [1.1911401, 0.86453723, 0]])
b = fe.backend.tensor_pow(n, 0.21)
self.assertTrue(np.allclose(b, target))
def test_torch_input_pow_gt_1(self):
n = torch.tensor([[1, 4, 6], [2.3, 0.5, 0]])
target = torch.tensor([[1, 8.44485063e+01, 3.09089322e+02], [1.43723927e+01, 1.08818820e-01, 0]])
b = fe.backend.tensor_pow(n, 3.2)
self.assertTrue(np.allclose(b, target))
def test_torch_input_pow_lt_1(self):
n = torch.tensor([[1, 4, 6], [2.3, 0.5, 0]])
target = torch.tensor([[1, 1.33792755, 1.45683968], [1.1911401, 0.86453723, 0]])
b = fe.backend.tensor_pow(n, 0.21)
self.assertTrue(np.allclose(b, target))
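The six methods above repeat the same numeric fixture for each backend; the shared expectation can be expressed once. This sketch uses plain Python floats, with `tensor_pow` as a local stand-in rather than the actual `fe.backend.tensor_pow`:

```python
import math

def tensor_pow(values, power):
    # Local stand-in: elementwise power over a nested list of numbers.
    return [[x ** power for x in row] for row in values]

n = [[1, 4, 6], [2.3, 0.5, 0]]
cases = {
    3.2: [[1, 84.4485063, 309.089322], [14.3723927, 0.108818820, 0]],
    0.21: [[1, 1.33792755, 1.45683968], [1.1911401, 0.86453723, 0]],
}
for power, target in cases.items():
    for got_row, want_row in zip(tensor_pow(n, power), target):
        for got, want in zip(got_row, want_row):
            assert math.isclose(got, want, rel_tol=1e-4, abs_tol=1e-9)
```

The loose `rel_tol` mirrors `np.allclose`, since the fixture values appear to have been captured from a float32 computation.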
# src/components/posts/__init__.py | john9384/PyblogRestAPI | MIT
from src.components.posts.forms import *
from src.components.posts.routes import *
# code/python/QuotesAPIforDigitalPortals/v3/fds/sdk/QuotesAPIforDigitalPortals/api/notation_api.py | factset/enterprise-sdk | Apache-2.0
"""
Quotes API For Digital Portals
The quotes API combines endpoints for retrieving security end-of-day, delayed, and realtime prices with performance key figures and basic reference data on the security and market level. The API supports over 20 different price types for each quote and comes with basic search endpoints based on security identifiers and instrument names. Market coverage is included in the *Sample Use Cases* section below. The Digital Portal use case is focused on high-performance applications that are * serving millions of end-users, * accessible by client browsers via the internet, * supporting subscriptions for streamed updates out-of-the-box, * typically combining a wide variety of *for Digital Portals*-APIs into a highly use-case specific solution for customers, * integrated into complex infrastructures such as existing frontend frameworks, authentication services. All APIs labelled *for Digital Portals* have been designed for direct use by client web applications and feature extreme low latency: The average response time across all endpoints is 30 ms whereas 99% of all requests are answered in close to under 300ms. See the Time Series API for Digital Portals for direct access to price histories, and the News API for Digital Portals for searching and fetching related news. # noqa: E501
The version of the OpenAPI document: 2
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from multiprocessing.pool import ApplyResult
import typing
from fds.sdk.QuotesAPIforDigitalPortals.api_client import ApiClient, Endpoint as _Endpoint
from fds.sdk.QuotesAPIforDigitalPortals.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types
)
from fds.sdk.QuotesAPIforDigitalPortals.exceptions import ApiException
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_object16 import InlineObject16
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_object17 import InlineObject17
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_object18 import InlineObject18
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_object19 import InlineObject19
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_object20 import InlineObject20
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_object22 import InlineObject22
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_response20064 import InlineResponse20064
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_response20065 import InlineResponse20065
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_response20066 import InlineResponse20066
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_response20067 import InlineResponse20067
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_response20068 import InlineResponse20068
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_response20069 import InlineResponse20069
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_response20070 import InlineResponse20070
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_response20071 import InlineResponse20071
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_response20072 import InlineResponse20072
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_response20073 import InlineResponse20073
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_response20074 import InlineResponse20074
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_response20075 import InlineResponse20075
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_response20076 import InlineResponse20076
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_response20077 import InlineResponse20077
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_response20078 import InlineResponse20078
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_response20088 import InlineResponse20088
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_response20089 import InlineResponse20089
from fds.sdk.QuotesAPIforDigitalPortals.model.inline_response20090 import InlineResponse20090
class NotationApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
self.get_notation_cross_reference_fact_set_identifier_get_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20069,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/crossReference/factSetIdentifier/get',
'operation_id': 'get_notation_cross_reference_fact_set_identifier_get',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
'attributes',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
'attributes',
]
},
root_map={
'validations': {
('attributes',): {
'max_items': 50,
},
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
'attributes':
([str],),
},
'attribute_map': {
'id': 'id',
'attributes': '_attributes',
},
'location_map': {
'id': 'query',
'attributes': 'query',
},
'collection_format_map': {
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_cross_reference_get_by_fact_set_market_symbol_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20066,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/crossReference/getByFactSetMarketSymbol',
'operation_id': 'get_notation_cross_reference_get_by_fact_set_market_symbol',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'fact_set_market_symbol',
'attributes',
'language',
],
'required': [
'fact_set_market_symbol',
],
'nullable': [
],
'enum': [
],
'validation': [
'fact_set_market_symbol',
'attributes',
'language',
]
},
root_map={
'validations': {
('fact_set_market_symbol',): {
'max_length': 32,
'min_length': 1,
},
('attributes',): {
'max_items': 50,
},
('language',): {
'max_length': 2,
'min_length': 2,
},
},
'allowed_values': {
},
'openapi_types': {
'fact_set_market_symbol':
(str,),
'attributes':
([str],),
'language':
(str,),
},
'attribute_map': {
'fact_set_market_symbol': 'factSetMarketSymbol',
'attributes': '_attributes',
'language': '_language',
},
'location_map': {
'fact_set_market_symbol': 'query',
'attributes': 'query',
'language': 'query',
},
'collection_format_map': {
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_get_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20064,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/get',
'operation_id': 'get_notation_get',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
'attributes',
'language',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
'attributes',
'language',
]
},
root_map={
'validations': {
('attributes',): {
'max_items': 50,
},
('language',): {
'max_length': 2,
'min_length': 2,
},
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
'attributes':
([str],),
'language':
(str,),
},
'attribute_map': {
'id': 'id',
'attributes': '_attributes',
'language': '_language',
},
'location_map': {
'id': 'query',
'attributes': 'query',
'language': 'query',
},
'collection_format_map': {
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_key_figures_month_1_get_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20072,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/keyFigures/month/1/get',
'operation_id': 'get_notation_key_figures_month_1_get',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
'attributes',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
'attributes',
]
},
root_map={
'validations': {
('attributes',): {
'max_items': 50,
},
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
'attributes':
([str],),
},
'attribute_map': {
'id': 'id',
'attributes': '_attributes',
},
'location_map': {
'id': 'query',
'attributes': 'query',
},
'collection_format_map': {
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_key_figures_month_1_list_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20073,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/keyFigures/month/1/list',
'operation_id': 'get_notation_key_figures_month_1_list',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'ids',
'attributes',
],
'required': [
'ids',
],
'nullable': [
],
'enum': [
],
'validation': [
'ids',
'attributes',
]
},
root_map={
'validations': {
('ids',): {
'max_items': 100,
},
('attributes',): {
'max_items': 50,
},
},
'allowed_values': {
},
'openapi_types': {
'ids':
([str],),
'attributes':
([str],),
},
'attribute_map': {
'ids': 'ids',
'attributes': '_attributes',
},
'location_map': {
'ids': 'query',
'attributes': 'query',
},
'collection_format_map': {
'ids': 'csv',
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_key_figures_month_3_get_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20074,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/keyFigures/month/3/get',
'operation_id': 'get_notation_key_figures_month_3_get',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
'attributes',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
'attributes',
]
},
root_map={
'validations': {
('attributes',): {
'max_items': 50,
},
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
'attributes':
([str],),
},
'attribute_map': {
'id': 'id',
'attributes': '_attributes',
},
'location_map': {
'id': 'query',
'attributes': 'query',
},
'collection_format_map': {
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_key_figures_month_3_list_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20075,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/keyFigures/month/3/list',
'operation_id': 'get_notation_key_figures_month_3_list',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'ids',
'attributes',
],
'required': [
'ids',
],
'nullable': [
],
'enum': [
],
'validation': [
'ids',
'attributes',
]
},
root_map={
'validations': {
('ids',): {
'max_items': 100,
},
('attributes',): {
'max_items': 50,
},
},
'allowed_values': {
},
'openapi_types': {
'ids':
([str],),
'attributes':
([str],),
},
'attribute_map': {
'ids': 'ids',
'attributes': '_attributes',
},
'location_map': {
'ids': 'query',
'attributes': 'query',
},
'collection_format_map': {
'ids': 'csv',
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_key_figures_month_6_get_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20074,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/keyFigures/month/6/get',
'operation_id': 'get_notation_key_figures_month_6_get',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
'attributes',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
'attributes',
]
},
root_map={
'validations': {
('attributes',): {
'max_items': 50,
},
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
'attributes':
([str],),
},
'attribute_map': {
'id': 'id',
'attributes': '_attributes',
},
'location_map': {
'id': 'query',
'attributes': 'query',
},
'collection_format_map': {
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_key_figures_month_6_list_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20075,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/keyFigures/month/6/list',
'operation_id': 'get_notation_key_figures_month_6_list',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'ids',
'attributes',
],
'required': [
'ids',
],
'nullable': [
],
'enum': [
],
'validation': [
'ids',
'attributes',
]
},
root_map={
'validations': {
('ids',): {
'max_items': 100,
},
('attributes',): {
'max_items': 50,
},
},
'allowed_values': {
},
'openapi_types': {
'ids':
([str],),
'attributes':
([str],),
},
'attribute_map': {
'ids': 'ids',
'attributes': '_attributes',
},
'location_map': {
'ids': 'query',
'attributes': 'query',
},
'collection_format_map': {
'ids': 'csv',
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_key_figures_week_1_get_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20072,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/keyFigures/week/1/get',
'operation_id': 'get_notation_key_figures_week_1_get',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
'attributes',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
'attributes',
]
},
root_map={
'validations': {
('attributes',): {
'max_items': 50,
},
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
'attributes':
([str],),
},
'attribute_map': {
'id': 'id',
'attributes': '_attributes',
},
'location_map': {
'id': 'query',
'attributes': 'query',
},
'collection_format_map': {
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_key_figures_week_1_list_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20073,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/keyFigures/week/1/list',
'operation_id': 'get_notation_key_figures_week_1_list',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'ids',
'attributes',
],
'required': [
'ids',
],
'nullable': [
],
'enum': [
],
'validation': [
'ids',
'attributes',
]
},
root_map={
'validations': {
('ids',): {
'max_items': 100,
},
('attributes',): {
'max_items': 50,
},
},
'allowed_values': {
},
'openapi_types': {
'ids':
([str],),
'attributes':
([str],),
},
'attribute_map': {
'ids': 'ids',
'attributes': '_attributes',
},
'location_map': {
'ids': 'query',
'attributes': 'query',
},
'collection_format_map': {
'ids': 'csv',
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_key_figures_year_1_get_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20072,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/keyFigures/year/1/get',
'operation_id': 'get_notation_key_figures_year_1_get',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
'attributes',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
'attributes',
]
},
root_map={
'validations': {
('attributes',): {
'max_items': 50,
},
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
'attributes':
([str],),
},
'attribute_map': {
'id': 'id',
'attributes': '_attributes',
},
'location_map': {
'id': 'query',
'attributes': 'query',
},
'collection_format_map': {
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_key_figures_year_1_list_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20073,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/keyFigures/year/1/list',
'operation_id': 'get_notation_key_figures_year_1_list',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'ids',
'attributes',
],
'required': [
'ids',
],
'nullable': [
],
'enum': [
],
'validation': [
'ids',
'attributes',
]
},
root_map={
'validations': {
('ids',): {
'max_items': 100,
},
('attributes',): {
'max_items': 50,
},
},
'allowed_values': {
},
'openapi_types': {
'ids':
([str],),
'attributes':
([str],),
},
'attribute_map': {
'ids': 'ids',
'attributes': '_attributes',
},
'location_map': {
'ids': 'query',
'attributes': 'query',
},
'collection_format_map': {
'ids': 'csv',
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_key_figures_year_3_get_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20074,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/keyFigures/year/3/get',
'operation_id': 'get_notation_key_figures_year_3_get',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
'attributes',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
'attributes',
]
},
root_map={
'validations': {
('attributes',): {
'max_items': 50,
},
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
'attributes':
([str],),
},
'attribute_map': {
'id': 'id',
'attributes': '_attributes',
},
'location_map': {
'id': 'query',
'attributes': 'query',
},
'collection_format_map': {
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_key_figures_year_3_list_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20075,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/keyFigures/year/3/list',
'operation_id': 'get_notation_key_figures_year_3_list',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'ids',
'attributes',
],
'required': [
'ids',
],
'nullable': [
],
'enum': [
],
'validation': [
'ids',
'attributes',
]
},
root_map={
'validations': {
('ids',): {
'max_items': 100,
},
('attributes',): {
'max_items': 50,
},
},
'allowed_values': {
},
'openapi_types': {
'ids':
([str],),
'attributes':
([str],),
},
'attribute_map': {
'ids': 'ids',
'attributes': '_attributes',
},
'location_map': {
'ids': 'query',
'attributes': 'query',
},
'collection_format_map': {
'ids': 'csv',
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_key_figures_year_5_get_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20074,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/keyFigures/year/5/get',
'operation_id': 'get_notation_key_figures_year_5_get',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
'attributes',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
'attributes',
]
},
root_map={
'validations': {
('attributes',): {
'max_items': 50,
},
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
'attributes':
([str],),
},
'attribute_map': {
'id': 'id',
'attributes': '_attributes',
},
'location_map': {
'id': 'query',
'attributes': 'query',
},
'collection_format_map': {
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_key_figures_year_5_list_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20075,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/keyFigures/year/5/list',
'operation_id': 'get_notation_key_figures_year_5_list',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'ids',
'attributes',
],
'required': [
'ids',
],
'nullable': [
],
'enum': [
],
'validation': [
'ids',
'attributes',
]
},
root_map={
'validations': {
('ids',): {
'max_items': 100,
},
('attributes',): {
'max_items': 50,
},
},
'allowed_values': {
},
'openapi_types': {
'ids':
([str],),
'attributes':
([str],),
},
'attribute_map': {
'ids': 'ids',
'attributes': '_attributes',
},
'location_map': {
'ids': 'query',
'attributes': 'query',
},
'collection_format_map': {
'ids': 'csv',
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_key_figures_year_to_date_get_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20076,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/keyFigures/yearToDate/get',
'operation_id': 'get_notation_key_figures_year_to_date_get',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
'attributes',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
'attributes',
]
},
root_map={
'validations': {
('attributes',): {
'max_items': 50,
},
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
'attributes':
([str],),
},
'attribute_map': {
'id': 'id',
'attributes': '_attributes',
},
'location_map': {
'id': 'query',
'attributes': 'query',
},
'collection_format_map': {
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_key_figures_year_to_date_list_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20077,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/keyFigures/yearToDate/list',
'operation_id': 'get_notation_key_figures_year_to_date_list',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'ids',
'attributes',
],
'required': [
'ids',
],
'nullable': [
],
'enum': [
],
'validation': [
'ids',
'attributes',
]
},
root_map={
'validations': {
('ids',): {
'max_items': 100,
},
('attributes',): {
'max_items': 50,
},
},
'allowed_values': {
},
'openapi_types': {
'ids':
([str],),
'attributes':
([str],),
},
'attribute_map': {
'ids': 'ids',
'attributes': '_attributes',
},
'location_map': {
'ids': 'query',
'attributes': 'query',
},
'collection_format_map': {
'ids': 'csv',
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_list_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20065,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/list',
'operation_id': 'get_notation_list',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'ids',
'attributes',
'language',
],
'required': [
'ids',
],
'nullable': [
],
'enum': [
],
'validation': [
'ids',
'attributes',
'language',
]
},
root_map={
'validations': {
('ids',): {
'max_items': 100,
},
('attributes',): {
'max_items': 50,
},
('language',): {
'max_length': 2,
'min_length': 2,
},
},
'allowed_values': {
},
'openapi_types': {
'ids':
([str],),
'attributes':
([str],),
'language':
(str,),
},
'attribute_map': {
'ids': 'ids',
'attributes': '_attributes',
'language': '_language',
},
'location_map': {
'ids': 'query',
'attributes': 'query',
'language': 'query',
},
'collection_format_map': {
'ids': 'csv',
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_search_basic_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20088,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/search/basic',
'operation_id': 'get_notation_search_basic',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'search_value',
'nsins',
'asset_class',
'only_active',
'popularity',
'attributes',
'language',
'pagination_offset',
'pagination_limit',
],
'required': [
'search_value',
],
'nullable': [
],
'enum': [
'nsins',
'asset_class',
],
'validation': [
'search_value',
'nsins',
'attributes',
'language',
'pagination_offset',
'pagination_limit',
]
},
root_map={
'validations': {
('search_value',): {
'max_length': 200,
'min_length': 3,
'regex': {
'pattern': r'^[ -!#-&(-+--\/0-:=?-Za-z\w"]*$', # noqa: E501
},
},
('nsins',): {
},
('attributes',): {
'max_items': 50,
},
('language',): {
'max_length': 2,
'min_length': 2,
},
('pagination_offset',): {
'inclusive_minimum': 0,
},
('pagination_limit',): {
'inclusive_maximum': 500,
'inclusive_minimum': 0,
},
},
'allowed_values': {
('nsins',): {
"WKN": "wkn",
"VALOR": "valor",
"CUSIP": "cusip",
"SEDOL": "sedol"
},
('asset_class',): {
"INDEX": "index",
"STOCK": "stock",
"FUND": "fund",
"ETF": "etf",
"DEBT": "debt",
"INVESTMENTPRODUCT": "investmentProduct",
"LEVERAGEDPRODUCT": "leveragedProduct",
"CURRENCY": "currency",
"COMMODITY": "commodity",
"OPTION": "option",
"FUTURE": "future",
"INTERESTRATE": "interestRate"
},
},
'openapi_types': {
'search_value':
(str,),
'nsins':
([str],),
'asset_class':
(str,),
'only_active':
(bool,),
'popularity':
(bool,),
'attributes':
([str],),
'language':
(str,),
'pagination_offset':
(float,),
'pagination_limit':
(float,),
},
'attribute_map': {
'search_value': 'searchValue',
'nsins': 'nsins',
'asset_class': 'assetClass',
'only_active': 'onlyActive',
'popularity': 'popularity',
'attributes': '_attributes',
'language': '_language',
'pagination_offset': '_paginationOffset',
'pagination_limit': '_paginationLimit',
},
'location_map': {
'search_value': 'query',
'nsins': 'query',
'asset_class': 'query',
'only_active': 'query',
'popularity': 'query',
'attributes': 'query',
'language': 'query',
'pagination_offset': 'query',
'pagination_limit': 'query',
},
'collection_format_map': {
'nsins': 'csv',
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_search_by_text_ranked_by_volume_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20090,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/searchByTextRankedByVolume',
'operation_id': 'get_notation_search_by_text_ranked_by_volume',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'search_value',
'id_markets',
'nsins',
'asset_class',
'only_active',
'attributes',
'language',
'pagination_offset',
'pagination_limit',
],
'required': [
'search_value',
],
'nullable': [
],
'enum': [
'nsins',
'asset_class',
],
'validation': [
'search_value',
'id_markets',
'nsins',
'attributes',
'language',
'pagination_offset',
'pagination_limit',
]
},
root_map={
'validations': {
('search_value',): {
'max_length': 100,
'min_length': 3,
'regex': {
'pattern': r'^[ -!#-&(-+--\/0-:=?-Za-z]*$', # noqa: E501
},
},
('id_markets',): {
'max_items': 100,
},
('nsins',): {
},
('attributes',): {
'max_items': 50,
},
('language',): {
'max_length': 2,
'min_length': 2,
},
('pagination_offset',): {
'inclusive_minimum': 0,
},
('pagination_limit',): {
'inclusive_maximum': 500,
'inclusive_minimum': 0,
},
},
'allowed_values': {
('nsins',): {
"WKN": "wkn",
"VALOR": "valor",
"CUSIP": "cusip",
"SEDOL": "sedol"
},
('asset_class',): {
"INDEX": "index",
"STOCK": "stock",
"FUND": "fund",
"ETF": "etf",
"DEBT": "debt",
"INVESTMENTPRODUCT": "investmentProduct",
"LEVERAGEDPRODUCT": "leveragedProduct",
"CURRENCY": "currency",
"COMMODITY": "commodity",
"OPTION": "option",
"FUTURE": "future",
"INTERESTRATE": "interestRate"
},
},
'openapi_types': {
'search_value':
(str,),
'id_markets':
([float],),
'nsins':
([str],),
'asset_class':
([str],),
'only_active':
(bool,),
'attributes':
([str],),
'language':
(str,),
'pagination_offset':
(float,),
'pagination_limit':
(float,),
},
'attribute_map': {
'search_value': 'searchValue',
'id_markets': 'idMarkets',
'nsins': 'nsins',
'asset_class': 'assetClass',
'only_active': 'onlyActive',
'attributes': '_attributes',
'language': '_language',
'pagination_offset': '_paginationOffset',
'pagination_limit': '_paginationLimit',
},
'location_map': {
'search_value': 'query',
'id_markets': 'query',
'nsins': 'query',
'asset_class': 'query',
'only_active': 'query',
'attributes': 'query',
'language': 'query',
'pagination_offset': 'query',
'pagination_limit': 'query',
},
'collection_format_map': {
'id_markets': 'csv',
'nsins': 'csv',
'asset_class': 'csv',
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_notation_status_get_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20078,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/status/get',
'operation_id': 'get_notation_status_get',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
'quality',
'attributes',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
'quality',
],
'validation': [
'attributes',
]
},
root_map={
'validations': {
('attributes',): {
'max_items': 50,
},
},
'allowed_values': {
('quality',): {
"RLT": "RLT",
"DLY": "DLY",
"BST": "BST"
},
},
'openapi_types': {
'id':
(str,),
'quality':
(str,),
'attributes':
([str],),
},
'attribute_map': {
'id': 'id',
'quality': 'quality',
'attributes': '_attributes',
},
'location_map': {
'id': 'query',
'quality': 'query',
'attributes': 'query',
},
'collection_format_map': {
'attributes': 'csv',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.post_notation_cross_reference_fact_set_identifier_list_by_fact_set_identifier_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20070,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/crossReference/factSetIdentifier/listByFactSetIdentifier',
'operation_id': 'post_notation_cross_reference_fact_set_identifier_list_by_fact_set_identifier',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'body',
],
'required': [
'body',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'body':
(InlineObject19,),
},
'attribute_map': {
},
'location_map': {
'body': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.post_notation_cross_reference_fact_set_identifier_list_by_instrument_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20071,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/crossReference/factSetIdentifier/listByInstrument',
'operation_id': 'post_notation_cross_reference_fact_set_identifier_list_by_instrument',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'body',
],
'required': [
'body',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'body':
(InlineObject20,),
},
'attribute_map': {
},
'location_map': {
'body': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.post_notation_cross_reference_list_by_instrument_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20067,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/crossReference/listByInstrument',
'operation_id': 'post_notation_cross_reference_list_by_instrument',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'body',
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'body':
(InlineObject16,),
},
'attribute_map': {
},
'location_map': {
'body': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.post_notation_cross_reference_list_by_isin_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20067,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/crossReference/listByISIN',
'operation_id': 'post_notation_cross_reference_list_by_isin',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'body',
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'body':
(InlineObject17,),
},
'attribute_map': {
},
'location_map': {
'body': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.post_notation_cross_reference_list_by_symbol_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20068,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/crossReference/listBySymbol',
'operation_id': 'post_notation_cross_reference_list_by_symbol',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'body',
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'body':
(InlineObject18,),
},
'attribute_map': {
},
'location_map': {
'body': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.post_notation_search_by_text_endpoint = _Endpoint(
settings={
'response_type': (
{ 200: (InlineResponse20089,), },
None
),
'auth': [
'FactSetApiKey',
'FactSetOAuth2'
],
'endpoint_path': '/notation/searchByText',
'operation_id': 'post_notation_search_by_text',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'body',
],
'required': [
'body',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'body':
(InlineObject22,),
},
'attribute_map': {
},
'location_map': {
'body': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
@staticmethod
def apply_kwargs_defaults(kwargs, return_http_data_only, async_req):
kwargs["async_req"] = async_req
kwargs["_return_http_data_only"] = return_http_data_only
kwargs["_preload_content"] = kwargs.get("_preload_content", True)
kwargs["_request_timeout"] = kwargs.get("_request_timeout", None)
kwargs["_check_input_type"] = kwargs.get("_check_input_type", True)
kwargs["_check_return_type"] = kwargs.get("_check_return_type", True)
kwargs["_spec_property_naming"] = kwargs.get("_spec_property_naming", False)
kwargs["_content_type"] = kwargs.get("_content_type")
kwargs["_host_index"] = kwargs.get("_host_index")
def get_notation_cross_reference_fact_set_identifier_get(
self,
id,
**kwargs
) -> InlineResponse20069:
"""Retrieve FactSet identifiers for a given notation. # noqa: E501
Retrieve FactSet identifiers for a given notation. Security and listing-level identifiers are always included; regional-level identifiers are included if available. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
id (str): Identifier of a notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20069
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['id'] = \
id
return self.get_notation_cross_reference_fact_set_identifier_get_endpoint.call_with_http_info(**kwargs)
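# Usage sketch (illustrative only; assumes an instance of this API class named
# `api`, and "133962" is a made-up placeholder notation id, not a real one):
#
#     response = api.get_notation_cross_reference_fact_set_identifier_get("133962")
#     # `response` is the deserialized InlineResponse20069 (http data only).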
def get_notation_cross_reference_fact_set_identifier_get_with_http_info(
self,
id,
**kwargs
) -> typing.Tuple[InlineResponse20069, int, typing.MutableMapping]:
"""Retrieve FactSet identifiers for a given notation. # noqa: E501
Retrieve FactSet identifiers for a given notation. Security and listing-level identifiers are always included; regional-level identifiers are included if available. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
id (str): Identifier of a notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20069
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['id'] = \
id
return self.get_notation_cross_reference_fact_set_identifier_get_endpoint.call_with_http_info(**kwargs)
def get_notation_cross_reference_fact_set_identifier_get_async(
self,
id,
**kwargs
) -> "ApplyResult[InlineResponse20069]":
"""Retrieve FactSet identifiers for a given notation. # noqa: E501
Retrieve FactSet identifiers for a given notation. Security and listing-level identifiers are always included; regional-level identifiers are included if available. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
id (str): Identifier of a notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20069]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['id'] = \
id
return self.get_notation_cross_reference_fact_set_identifier_get_endpoint.call_with_http_info(**kwargs)
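# Async usage sketch (illustrative only; assumes an instance of this API class
# named `api` and a placeholder id). With async_req=True the call is expected
# to return a multiprocessing.pool.ApplyResult; .get() blocks until the
# response is available:
#
#     result = api.get_notation_cross_reference_fact_set_identifier_get_async("133962")
#     data = result.get()  # deserialized InlineResponse20069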
def get_notation_cross_reference_fact_set_identifier_get_with_http_info_async(
self,
id,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20069, int, typing.MutableMapping]]":
"""Retrieve FactSet identifiers for a given notation. # noqa: E501
Retrieve FactSet identifiers for a given notation. Security and listing-level identifiers are always included; regional-level identifiers are included if available. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
id (str): Identifier of a notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20069, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['id'] = \
id
return self.get_notation_cross_reference_fact_set_identifier_get_endpoint.call_with_http_info(**kwargs)
def get_notation_cross_reference_get_by_fact_set_market_symbol(
self,
fact_set_market_symbol,
**kwargs
) -> InlineResponse20066:
"""Translate a FactSet market symbol to a notation. # noqa: E501
Translate a FactSet market symbol to a notation. This symbol is also known as TICKER_EXCHANGE. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
fact_set_market_symbol (str): Market symbol defined by FactSet to identify a notation (i.e. TICKER_EXCHANGE).
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20066
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['fact_set_market_symbol'] = \
fact_set_market_symbol
return self.get_notation_cross_reference_get_by_fact_set_market_symbol_endpoint.call_with_http_info(**kwargs)
def get_notation_cross_reference_get_by_fact_set_market_symbol_with_http_info(
self,
fact_set_market_symbol,
**kwargs
) -> typing.Tuple[InlineResponse20066, int, typing.MutableMapping]:
"""Translate a FactSet market symbol to a notation. # noqa: E501
Translate a FactSet market symbol to a notation. This symbol is also known as TICKER_EXCHANGE. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
fact_set_market_symbol (str): Market symbol defined by FactSet to identify a notation (i.e. TICKER_EXCHANGE).
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20066
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['fact_set_market_symbol'] = \
fact_set_market_symbol
return self.get_notation_cross_reference_get_by_fact_set_market_symbol_endpoint.call_with_http_info(**kwargs)
def get_notation_cross_reference_get_by_fact_set_market_symbol_async(
self,
fact_set_market_symbol,
**kwargs
) -> "ApplyResult[InlineResponse20066]":
"""Translate a FactSet market symbol to a notation. # noqa: E501
Translate a FactSet market symbol to a notation. This symbol is also known as TICKER_EXCHANGE. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
fact_set_market_symbol (str): Market symbol defined by FactSet to identify a notation (i.e. TICKER_EXCHANGE).
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20066]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['fact_set_market_symbol'] = \
fact_set_market_symbol
return self.get_notation_cross_reference_get_by_fact_set_market_symbol_endpoint.call_with_http_info(**kwargs)
def get_notation_cross_reference_get_by_fact_set_market_symbol_with_http_info_async(
self,
fact_set_market_symbol,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20066, int, typing.MutableMapping]]":
"""Translate a FactSet market symbol to a notation. # noqa: E501
Translate a FactSet market symbol to a notation. This symbol is also known as TICKER_EXCHANGE. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
fact_set_market_symbol (str): Market symbol defined by FactSet to identify a notation (i.e. TICKER_EXCHANGE).
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20066, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['fact_set_market_symbol'] = \
fact_set_market_symbol
return self.get_notation_cross_reference_get_by_fact_set_market_symbol_endpoint.call_with_http_info(**kwargs)
def get_notation_get(
self,
id,
**kwargs
) -> InlineResponse20064:
"""Basic data for a notation. # noqa: E501
Basic data for a notation. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
id (str): Identifier of a notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20064
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['id'] = \
id
return self.get_notation_get_endpoint.call_with_http_info(**kwargs)
def get_notation_get_with_http_info(
self,
id,
**kwargs
) -> typing.Tuple[InlineResponse20064, int, typing.MutableMapping]:
"""Basic data for a notation. # noqa: E501
Basic data for a notation. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
id (str): Identifier of a notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20064
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['id'] = id
return self.get_notation_get_endpoint.call_with_http_info(**kwargs)
def get_notation_get_async(
self,
id,
**kwargs
) -> "ApplyResult[InlineResponse20064]":
"""Basic data for a notation. # noqa: E501
Basic data for a notation. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
id (str): Identifier of a notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20064]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['id'] = id
return self.get_notation_get_endpoint.call_with_http_info(**kwargs)
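# The *_async variants submit the request to the client's thread pool and
# return a multiprocessing.pool.ApplyResult immediately; call .get() to
# block for the value. Hypothetical usage:
#   future = api.get_notation_get_async("DE0007100000")
#   notation = future.get()  # raises here on HTTP or validation errors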
def get_notation_get_with_http_info_async(
self,
id,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20064, int, typing.MutableMapping]]":
"""Basic data for a notation. # noqa: E501
Basic data for a notation. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
id (str): Identifier of a notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20064, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['id'] = id
return self.get_notation_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_1_get(
self,
id,
**kwargs
) -> InlineResponse20072:
"""End-of-day (EOD) key figures for the time range of one month. # noqa: E501
End-of-day (EOD) key figures for the time range of one month. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20072
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['id'] = id
return self.get_notation_key_figures_month_1_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_1_get_with_http_info(
self,
id,
**kwargs
) -> typing.Tuple[InlineResponse20072, int, typing.MutableMapping]:
"""End-of-day (EOD) key figures for the time range of one month. # noqa: E501
End-of-day (EOD) key figures for the time range of one month. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20072
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['id'] = id
return self.get_notation_key_figures_month_1_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_1_get_async(
self,
id,
**kwargs
) -> "ApplyResult[InlineResponse20072]":
"""End-of-day (EOD) key figures for the time range of one month. # noqa: E501
End-of-day (EOD) key figures for the time range of one month. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20072]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['id'] = id
return self.get_notation_key_figures_month_1_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_1_get_with_http_info_async(
self,
id,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20072, int, typing.MutableMapping]]":
"""End-of-day (EOD) key figures for the time range of one month. # noqa: E501
End-of-day (EOD) key figures for the time range of one month. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20072, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['id'] = id
return self.get_notation_key_figures_month_1_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_1_list(
self,
ids,
**kwargs
) -> InlineResponse20073:
"""End-of-day (EOD) key figures for the time range of one month, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of one month, for a list of notations. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20073
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['ids'] = ids
return self.get_notation_key_figures_month_1_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_1_list_with_http_info(
self,
ids,
**kwargs
) -> typing.Tuple[InlineResponse20073, int, typing.MutableMapping]:
"""End-of-day (EOD) key figures for the time range of one month, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of one month, for a list of notations. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20073
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['ids'] = ids
return self.get_notation_key_figures_month_1_list_endpoint.call_with_http_info(**kwargs)
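# The *_with_http_info variants also expose the status code and headers.
# Hypothetical usage:
#   data, status, headers = \
#       api.get_notation_key_figures_month_1_list_with_http_info(
#           ["notation-id-1", "notation-id-2"])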
def get_notation_key_figures_month_1_list_async(
self,
ids,
**kwargs
) -> "ApplyResult[InlineResponse20073]":
"""End-of-day (EOD) key figures for the time range of one month, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of one month, for a list of notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20073]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['ids'] = ids
return self.get_notation_key_figures_month_1_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_1_list_with_http_info_async(
self,
ids,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20073, int, typing.MutableMapping]]":
"""End-of-day (EOD) key figures for the time range of one month, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of one month, for a list of notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20073, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['ids'] = ids
return self.get_notation_key_figures_month_1_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_3_get(
self,
id,
**kwargs
) -> InlineResponse20074:
"""End-of-day (EOD) key figures for the time range of three months. # noqa: E501
End-of-day (EOD) key figures for the time range of three months. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20074
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['id'] = id
return self.get_notation_key_figures_month_3_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_3_get_with_http_info(
self,
id,
**kwargs
) -> typing.Tuple[InlineResponse20074, int, typing.MutableMapping]:
"""End-of-day (EOD) key figures for the time range of three months. # noqa: E501
End-of-day (EOD) key figures for the time range of three months. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20074
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['id'] = id
return self.get_notation_key_figures_month_3_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_3_get_async(
self,
id,
**kwargs
) -> "ApplyResult[InlineResponse20074]":
"""End-of-day (EOD) key figures for the time range of three months. # noqa: E501
End-of-day (EOD) key figures for the time range of three months. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20074]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['id'] = id
return self.get_notation_key_figures_month_3_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_3_get_with_http_info_async(
self,
id,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20074, int, typing.MutableMapping]]":
"""End-of-day (EOD) key figures for the time range of three months. # noqa: E501
End-of-day (EOD) key figures for the time range of three months. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20074, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['id'] = id
return self.get_notation_key_figures_month_3_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_3_list(
self,
ids,
**kwargs
) -> InlineResponse20075:
"""End-of-day (EOD) key figures for the time range of three months, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of three months, for a list of notations. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20075
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['ids'] = ids
return self.get_notation_key_figures_month_3_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_3_list_with_http_info(
self,
ids,
**kwargs
) -> typing.Tuple[InlineResponse20075, int, typing.MutableMapping]:
"""End-of-day (EOD) key figures for the time range of three months, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of three months, for a list of notations. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20075
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['ids'] = ids
return self.get_notation_key_figures_month_3_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_3_list_async(
self,
ids,
**kwargs
) -> "ApplyResult[InlineResponse20075]":
"""End-of-day (EOD) key figures for the time range of three months, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of three months, for a list of notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done one the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done one the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20075]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['ids'] = ids
return self.get_notation_key_figures_month_3_list_endpoint.call_with_http_info(**kwargs)
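The `*_async` variants return a multiprocessing `ApplyResult` immediately; the caller only blocks when `.get()` is invoked. A sketch of that pattern, with a `ThreadPool` and `fake_call` standing in for the SDK's internal pool and HTTP call (both illustrative):

```python
from multiprocessing.pool import ThreadPool

def fake_call(ids):
    # Stand-in for the blocking HTTP call the SDK would dispatch to a worker.
    return {"requested": ids}

pool = ThreadPool(processes=1)
# apply_async returns an ApplyResult right away, like the *_async methods here.
result = pool.apply_async(fake_call, (["133962", "133963"],))
payload = result.get(timeout=5)  # blocks until the worker finishes
pool.close()
pool.join()
```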
def get_notation_key_figures_month_3_list_with_http_info_async(
self,
ids,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20075, int, typing.MutableMapping]]":
"""End-of-day (EOD) key figures for the time range of three months, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of three months, for a list of notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
a single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20075, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['ids'] = ids
return self.get_notation_key_figures_month_3_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_6_get(
self,
id,
**kwargs
) -> InlineResponse20074:
"""End-of-day (EOD) key figures for the time range of six months. # noqa: E501
End-of-day (EOD) key figures for the time range of six months. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
a single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20074
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['id'] = id
return self.get_notation_key_figures_month_6_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_6_get_with_http_info(
self,
id,
**kwargs
) -> typing.Tuple[InlineResponse20074, int, typing.MutableMapping]:
"""End-of-day (EOD) key figures for the time range of six months. # noqa: E501
End-of-day (EOD) key figures for the time range of six months. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
a single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20074
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['id'] = id
return self.get_notation_key_figures_month_6_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_6_get_async(
self,
id,
**kwargs
) -> "ApplyResult[InlineResponse20074]":
"""End-of-day (EOD) key figures for the time range of six months. # noqa: E501
End-of-day (EOD) key figures for the time range of six months. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
a single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20074]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['id'] = id
return self.get_notation_key_figures_month_6_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_6_get_with_http_info_async(
self,
id,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20074, int, typing.MutableMapping]]":
"""End-of-day (EOD) key figures for the time range of six months. # noqa: E501
End-of-day (EOD) key figures for the time range of six months. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
a single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20074, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['id'] = id
return self.get_notation_key_figures_month_6_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_6_list(
self,
ids,
**kwargs
) -> InlineResponse20075:
"""End-of-day (EOD) key figures for the time range of six months, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of six months, for a list of notations. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
a single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20075
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['ids'] = ids
return self.get_notation_key_figures_month_6_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_6_list_with_http_info(
self,
ids,
**kwargs
) -> typing.Tuple[InlineResponse20075, int, typing.MutableMapping]:
"""End-of-day (EOD) key figures for the time range of six months, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of six months, for a list of notations. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
a single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20075
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['ids'] = ids
return self.get_notation_key_figures_month_6_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_6_list_async(
self,
ids,
**kwargs
) -> "ApplyResult[InlineResponse20075]":
"""End-of-day (EOD) key figures for the time range of six months, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of six months, for a list of notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
a single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20075]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['ids'] = ids
return self.get_notation_key_figures_month_6_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_month_6_list_with_http_info_async(
self,
ids,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20075, int, typing.MutableMapping]]":
"""End-of-day (EOD) key figures for the time range of six months, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of six months, for a list of notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
a single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20075, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['ids'] = ids
return self.get_notation_key_figures_month_6_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_week_1_get(
self,
id,
**kwargs
) -> InlineResponse20072:
"""End-of-day (EOD) key figures for the time range of one week. # noqa: E501
End-of-day (EOD) key figures for the time range of one week. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
a single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20072
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['id'] = id
return self.get_notation_key_figures_week_1_get_endpoint.call_with_http_info(**kwargs)
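As the docstrings note, `_request_timeout` accepts either a single number (a total request timeout) or a `(connection, read)` pair. A hypothetical helper (not part of the generated SDK; the name and return shape are illustrative) that spells out that distinction:

```python
# Hypothetical helper showing how a _request_timeout value is interpreted
# per the docstrings above: a single number is a total request timeout,
# a 2-tuple is separate (connection, read) timeouts.
def normalize_timeout(value):
    if value is None:
        return None  # fall back to the configured default
    if isinstance(value, tuple):
        connect, read = value
        return {"connect": connect, "read": read}
    return {"total": value}

print(normalize_timeout(30))          # one number caps the whole request
print(normalize_timeout((3.05, 27)))  # separate connection/read bounds
```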
def get_notation_key_figures_week_1_get_with_http_info(
self,
id,
**kwargs
) -> typing.Tuple[InlineResponse20072, int, typing.MutableMapping]:
"""End-of-day (EOD) key figures for the time range of one week. # noqa: E501
End-of-day (EOD) key figures for the time range of one week. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
a single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20072
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['id'] = id
return self.get_notation_key_figures_week_1_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_week_1_get_async(
self,
id,
**kwargs
) -> "ApplyResult[InlineResponse20072]":
"""End-of-day (EOD) key figures for the time range of one week. # noqa: E501
End-of-day (EOD) key figures for the time range of one week. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
a single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20072]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['id'] = id
return self.get_notation_key_figures_week_1_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_week_1_get_with_http_info_async(
self,
id,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20072, int, typing.MutableMapping]]":
"""End-of-day (EOD) key figures for the time range of one week. # noqa: E501
End-of-day (EOD) key figures for the time range of one week. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
a single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20072, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['id'] = id
return self.get_notation_key_figures_week_1_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_week_1_list(
self,
ids,
**kwargs
) -> InlineResponse20073:
"""End-of-day (EOD) key figures for the time range of one week, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of one week, for a list of notations. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
a single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20073
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['ids'] = ids
return self.get_notation_key_figures_week_1_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_week_1_list_with_http_info(
self,
ids,
**kwargs
) -> typing.Tuple[InlineResponse20073, int, typing.MutableMapping]:
"""End-of-day (EOD) key figures for the time range of one week, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of one week, for a list of notations. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
a single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20073
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['ids'] = ids
return self.get_notation_key_figures_week_1_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_week_1_list_async(
self,
ids,
**kwargs
) -> "ApplyResult[InlineResponse20073]":
"""End-of-day (EOD) key figures for the time range of one week, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of one week, for a list of notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
a single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20073]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['ids'] = ids
return self.get_notation_key_figures_week_1_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_week_1_list_with_http_info_async(
self,
ids,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20073, int, typing.MutableMapping]]":
"""End-of-day (EOD) key figures for the time range of one week, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of one week, for a list of notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20073, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['ids'] = ids
return self.get_notation_key_figures_week_1_list_endpoint.call_with_http_info(**kwargs)
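The async variants above return their result wrapped in an ApplyResult, which is the handle type produced by `multiprocessing.pool` workers. A minimal stdlib sketch of that pattern, independent of this generated client (the `fake_request` function is a hypothetical stand-in for the HTTP call):

```python
from multiprocessing.pool import ThreadPool

def fake_request():
    # Hypothetical stand-in for the HTTP call the generated client would make.
    return {"data": [1, 2, 3]}

# The pool runs the function in a worker thread and immediately returns
# an ApplyResult/AsyncResult handle instead of the payload itself.
pool = ThreadPool(processes=1)
async_result = pool.apply_async(fake_request)

# .get() blocks until the worker finishes, then returns the payload
# (or re-raises any exception the worker hit).
payload = async_result.get(timeout=5)

pool.close()
pool.join()
```

Calling `.get()` is how client code eventually retrieves the response from any of the `*_async` methods in this class.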
def get_notation_key_figures_year_1_get(
self,
id,
**kwargs
) -> InlineResponse20072:
"""End-of-day (EOD) key figures for the time range of one year. # noqa: E501
End-of-day (EOD) key figures for the time range of one year. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20072
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['id'] = id
return self.get_notation_key_figures_year_1_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_1_get_with_http_info(
self,
id,
**kwargs
) -> typing.Tuple[InlineResponse20072, int, typing.MutableMapping]:
"""End-of-day (EOD) key figures for the time range of one year. # noqa: E501
End-of-day (EOD) key figures for the time range of one year. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20072
Response Object
int
HTTP status code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['id'] = id
return self.get_notation_key_figures_year_1_get_endpoint.call_with_http_info(**kwargs)
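As the Returns section above describes, the `*_with_http_info` variants hand back a 3-tuple of response object, HTTP status, and headers rather than the data alone. A hypothetical sketch of consuming that shape (the `fake_call_with_http_info` function and its payload are illustrative stand-ins, not part of this client):

```python
def fake_call_with_http_info():
    # Illustrative stand-in mirroring the (data, status, headers) contract
    # of the generated *_with_http_info methods.
    data = {"figures": {"high": 101.5, "low": 87.2}}
    return data, 200, {"Content-Type": "application/json"}

# Unpack all three parts; callers that only need the body can use the
# plain variant of the method instead.
data, status, headers = fake_call_with_http_info()
```

This is useful when the caller needs to branch on the status code or inspect rate-limit headers in addition to reading the body.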
def get_notation_key_figures_year_1_get_async(
self,
id,
**kwargs
) -> "ApplyResult[InlineResponse20072]":
"""End-of-day (EOD) key figures for the time range of one year. # noqa: E501
End-of-day (EOD) key figures for the time range of one year. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20072]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['id'] = id
return self.get_notation_key_figures_year_1_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_1_get_with_http_info_async(
self,
id,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20072, int, typing.MutableMapping]]":
"""End-of-day (EOD) key figures for the time range of one year. # noqa: E501
End-of-day (EOD) key figures for the time range of one year. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20072, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['id'] = id
return self.get_notation_key_figures_year_1_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_1_list(
self,
ids,
**kwargs
) -> InlineResponse20073:
"""End-of-day (EOD) key figures for the time range of one year, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of one year, for a list of notations. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20073
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['ids'] = ids
return self.get_notation_key_figures_year_1_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_1_list_with_http_info(
self,
ids,
**kwargs
) -> typing.Tuple[InlineResponse20073, int, typing.MutableMapping]:
"""End-of-day (EOD) key figures for the time range of one year, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of one year, for a list of notations. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20073
Response Object
int
HTTP status code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['ids'] = ids
return self.get_notation_key_figures_year_1_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_1_list_async(
self,
ids,
**kwargs
) -> "ApplyResult[InlineResponse20073]":
"""End-of-day (EOD) key figures for the time range of one year, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of one year, for a list of notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20073]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['ids'] = ids
return self.get_notation_key_figures_year_1_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_1_list_with_http_info_async(
self,
ids,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20073, int, typing.MutableMapping]]":
"""End-of-day (EOD) key figures for the time range of one year, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of one year, for a list of notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20073, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['ids'] = ids
return self.get_notation_key_figures_year_1_list_endpoint.call_with_http_info(**kwargs)
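Every method above accepts `_request_timeout` either as a single number (total request timeout) or as a `(connection, read)` pair. A hypothetical helper sketching how such a value can be normalized (the `normalize_request_timeout` function and its dict shape are illustrative, not part of this client):

```python
def normalize_request_timeout(value):
    # A single int/float is treated as the total request timeout;
    # a 2-tuple is split into separate (connection, read) timeouts,
    # mirroring the _request_timeout keyword documented above.
    if isinstance(value, (int, float)):
        return {"total": value, "connect": None, "read": None}
    connect, read = value  # expects a (connection, read) pair
    return {"total": None, "connect": connect, "read": read}

total_only = normalize_request_timeout(10)
split = normalize_request_timeout((3.05, 27))
```

The tuple form is handy when a slow server should still be allowed a short connection phase but a longer read phase.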
def get_notation_key_figures_year_3_get(
self,
id,
**kwargs
) -> InlineResponse20074:
"""End-of-day (EOD) key figures for the time range of three years. # noqa: E501
End-of-day (EOD) key figures for the time range of three years. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20074
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['id'] = id
return self.get_notation_key_figures_year_3_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_3_get_with_http_info(
self,
id,
**kwargs
) -> typing.Tuple[InlineResponse20074, int, typing.MutableMapping]:
"""End-of-day (EOD) key figures for the time range of three years. # noqa: E501
End-of-day (EOD) key figures for the time range of three years. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20074
Response Object
int
HTTP status code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['id'] = id
return self.get_notation_key_figures_year_3_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_3_get_async(
self,
id,
**kwargs
) -> "ApplyResult[InlineResponse20074]":
"""End-of-day (EOD) key figures for the time range of three years. # noqa: E501
End-of-day (EOD) key figures for the time range of three years. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20074]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['id'] = id
return self.get_notation_key_figures_year_3_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_3_get_with_http_info_async(
self,
id,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20074, int, typing.MutableMapping]]":
"""End-of-day (EOD) key figures for the time range of three years. # noqa: E501
End-of-day (EOD) key figures for the time range of three years. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20074, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['id'] = id
return self.get_notation_key_figures_year_3_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_3_list(
self,
ids,
**kwargs
) -> InlineResponse20075:
"""End-of-day (EOD) key figures for the time range of three years, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of three years, for a list of notations. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20075
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['ids'] = ids
return self.get_notation_key_figures_year_3_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_3_list_with_http_info(
self,
ids,
**kwargs
) -> typing.Tuple[InlineResponse20075, int, typing.MutableMapping]:
"""End-of-day (EOD) key figures for the time range of three years, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of three years, for a list of notations. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20075
Response Object
int
HTTP status code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['ids'] = ids
return self.get_notation_key_figures_year_3_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_3_list_async(
self,
ids,
**kwargs
) -> "ApplyResult[InlineResponse20075]":
"""End-of-day (EOD) key figures for the time range of three years, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of three years, for a list of notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20075]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['ids'] = ids
return self.get_notation_key_figures_year_3_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_3_list_with_http_info_async(
self,
ids,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20075, int, typing.MutableMapping]]":
"""End-of-day (EOD) key figures for the time range of three years, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of three years, for a list of notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20075, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['ids'] = ids
return self.get_notation_key_figures_year_3_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_5_get(
self,
id,
**kwargs
) -> InlineResponse20074:
"""End-of-day (EOD) key figures for the time range of five years. # noqa: E501
End-of-day (EOD) key figures for the time range of five years. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20074
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['id'] = \
id
return self.get_notation_key_figures_year_5_get_endpoint.call_with_http_info(**kwargs)
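# The docstrings above say `_request_timeout` may be a single number (total
# request timeout) or a (connection, read) pair. A minimal, self-contained
# sketch of interpreting that contract; `split_request_timeout` is a
# hypothetical helper for illustration, not part of this client:

```python
def split_request_timeout(value):
    """Interpret a `_request_timeout` value per the docstring contract.

    A single int/float is a total request timeout; a 2-tuple is a pair of
    (connection, read) timeouts. Returns (connect, read, total).
    """
    if value is None:
        return (None, None, None)
    if isinstance(value, (int, float)):
        # One number provided: treat it as the total request timeout.
        return (None, None, float(value))
    connect, read = value  # a pair (tuple) of (connection, read) timeouts
    return (float(connect), float(read), None)
```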
def get_notation_key_figures_year_5_get_with_http_info(
self,
id,
**kwargs
) -> typing.Tuple[InlineResponse20074, int, typing.MutableMapping]:
"""End-of-day (EOD) key figures for the time range of five years. # noqa: E501
End-of-day (EOD) key figures for the time range of five years. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20074
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['id'] = \
id
return self.get_notation_key_figures_year_5_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_5_get_async(
self,
id,
**kwargs
) -> "ApplyResult[InlineResponse20074]":
"""End-of-day (EOD) key figures for the time range of five years. # noqa: E501
End-of-day (EOD) key figures for the time range of five years. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20074]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['id'] = \
id
return self.get_notation_key_figures_year_5_get_endpoint.call_with_http_info(**kwargs)
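# The `_async` variants above return the call wrapped in an ApplyResult,
# i.e. the future-like object from multiprocessing.pool. A self-contained
# sketch of that consumption pattern; `fetch` and the id value are stand-ins
# for illustration, not this client's actual HTTP call:

```python
from multiprocessing.pool import ThreadPool


def fetch(notation_id):
    # Stand-in for an HTTP request; returns a fake "response" payload.
    return {"id": notation_id, "figures": {}}


pool = ThreadPool(processes=1)
result = pool.apply_async(fetch, ("133962",))  # returns an ApplyResult immediately
data = result.get(timeout=10)                  # blocks until the call completes
pool.close()
pool.join()
```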
def get_notation_key_figures_year_5_get_with_http_info_async(
self,
id,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20074, int, typing.MutableMapping]]":
"""End-of-day (EOD) key figures for the time range of five years. # noqa: E501
End-of-day (EOD) key figures for the time range of five years. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20074, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['id'] = \
id
return self.get_notation_key_figures_year_5_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_5_list(
self,
ids,
**kwargs
) -> InlineResponse20075:
"""End-of-day (EOD) key figures for the time range of five years, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of five years, for a list of notations. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20075
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['ids'] = \
ids
return self.get_notation_key_figures_year_5_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_5_list_with_http_info(
self,
ids,
**kwargs
) -> typing.Tuple[InlineResponse20075, int, typing.MutableMapping]:
"""End-of-day (EOD) key figures for the time range of five years, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of five years, for a list of notations. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20075
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['ids'] = \
ids
return self.get_notation_key_figures_year_5_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_5_list_async(
self,
ids,
**kwargs
) -> "ApplyResult[InlineResponse20075]":
"""End-of-day (EOD) key figures for the time range of five years, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of five years, for a list of notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20075]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['ids'] = \
ids
return self.get_notation_key_figures_year_5_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_5_list_with_http_info_async(
self,
ids,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20075, int, typing.MutableMapping]]":
"""End-of-day (EOD) key figures for the time range of five years, for a list of notations. # noqa: E501
End-of-day (EOD) key figures for the time range of five years, for a list of notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20075, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['ids'] = \
ids
return self.get_notation_key_figures_year_5_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_to_date_get(
self,
id,
**kwargs
) -> InlineResponse20076:
"""End-of-day (EOD) key figures for the time range year-to-date (YTD).. # noqa: E501
End-of-day (EOD) key figures for the time range year-to-date (YTD). The time range YTD begins with the last trading day of the previous calendar year for which EOD prices are available and ends with the most recent trading day of the current calendar year for which EOD prices are available.. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20076
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['id'] = \
id
return self.get_notation_key_figures_year_to_date_get_endpoint.call_with_http_info(**kwargs)
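# The docstring above defines the YTD range as starting at the last trading
# day of the previous calendar year. A rough, self-contained sketch of that
# start date which only skips weekends; it approximates, and does not
# reproduce, the exchange-holiday calendar the API actually uses:

```python
import datetime


def ytd_start(today):
    """Last weekday of the calendar year before `today` (holidays ignored)."""
    day = datetime.date(today.year - 1, 12, 31)
    while day.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        day -= datetime.timedelta(days=1)
    return day
```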
def get_notation_key_figures_year_to_date_get_with_http_info(
self,
id,
**kwargs
) -> typing.Tuple[InlineResponse20076, int, typing.MutableMapping]:
"""End-of-day (EOD) key figures for the time range year-to-date (YTD).. # noqa: E501
End-of-day (EOD) key figures for the time range year-to-date (YTD). The time range YTD begins with the last trading day of the previous calendar year for which EOD prices are available and ends with the most recent trading day of the current calendar year for which EOD prices are available.. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20076
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['id'] = \
id
return self.get_notation_key_figures_year_to_date_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_to_date_get_async(
self,
id,
**kwargs
) -> "ApplyResult[InlineResponse20076]":
"""End-of-day (EOD) key figures for the time range year-to-date (YTD).. # noqa: E501
End-of-day (EOD) key figures for the time range year-to-date (YTD). The time range YTD begins with the last trading day of the previous calendar year for which EOD prices are available and ends with the most recent trading day of the current calendar year for which EOD prices are available.. # noqa: E501
This method makes a asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20076]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['id'] = \
id
return self.get_notation_key_figures_year_to_date_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_to_date_get_with_http_info_async(
self,
id,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20076, int, typing.MutableMapping]]":
"""End-of-day (EOD) key figures for the time range year-to-date (YTD).. # noqa: E501
End-of-day (EOD) key figures for the time range year-to-date (YTD). The time range YTD begins with the last trading day of the previous calendar year for which EOD prices are available and ends with the most recent trading day of the current calendar year for which EOD prices are available.. # noqa: E501
This method makes a asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
id (str): Identifier of the notation.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20076, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['id'] = \
id
return self.get_notation_key_figures_year_to_date_get_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_to_date_list(
self,
ids,
**kwargs
) -> InlineResponse20077:
"""End-of-day (EOD) key figures for the time range year-to-date (YTD), for a list of notations.. # noqa: E501
End-of-day (EOD) key figures for the time range year-to-date (YTD), for a list of notations. The time range YTD begins with the last trading day of the previous calendar year for which EOD prices are available and ends with the most recent tradingday of the current calendar year for which EOD prices are available.. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20077
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['ids'] = \
ids
return self.get_notation_key_figures_year_to_date_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_to_date_list_with_http_info(
self,
ids,
**kwargs
) -> typing.Tuple[InlineResponse20077, int, typing.MutableMapping]:
"""End-of-day (EOD) key figures for the time range year-to-date (YTD), for a list of notations.. # noqa: E501
End-of-day (EOD) key figures for the time range year-to-date (YTD), for a list of notations. The time range YTD begins with the last trading day of the previous calendar year for which EOD prices are available and ends with the most recent tradingday of the current calendar year for which EOD prices are available.. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20077
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['ids'] = \
ids
return self.get_notation_key_figures_year_to_date_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_to_date_list_async(
self,
ids,
**kwargs
) -> "ApplyResult[InlineResponse20077]":
"""End-of-day (EOD) key figures for the time range year-to-date (YTD), for a list of notations.. # noqa: E501
End-of-day (EOD) key figures for the time range year-to-date (YTD), for a list of notations. The time range YTD begins with the last trading day of the previous calendar year for which EOD prices are available and ends with the most recent tradingday of the current calendar year for which EOD prices are available.. # noqa: E501
This method makes a asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20077]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['ids'] = \
ids
return self.get_notation_key_figures_year_to_date_list_endpoint.call_with_http_info(**kwargs)
def get_notation_key_figures_year_to_date_list_with_http_info_async(
self,
ids,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20077, int, typing.MutableMapping]]":
"""End-of-day (EOD) key figures for the time range year-to-date (YTD), for a list of notations.. # noqa: E501
End-of-day (EOD) key figures for the time range year-to-date (YTD), for a list of notations. The time range YTD begins with the last trading day of the previous calendar year for which EOD prices are available and ends with the most recent tradingday of the current calendar year for which EOD prices are available.. # noqa: E501
This method makes a asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20077, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['ids'] = \
ids
return self.get_notation_key_figures_year_to_date_list_endpoint.call_with_http_info(**kwargs)
def get_notation_list(
self,
ids,
**kwargs
) -> InlineResponse20065:
"""Basic data for a list of notations. # noqa: E501
Basic data for a list of notations. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20065
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['ids'] = \
ids
return self.get_notation_list_endpoint.call_with_http_info(**kwargs)
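# Usage sketch (hypothetical identifiers; assumes this API class was built
# around an already configured ApiClient instance named `api`):
#
#     response = api.get_notation_list(ids=["12345"], language="en")
#
# The synchronous variant above returns only the deserialized body. Use
# get_notation_list_with_http_info() when the HTTP status code and response
# headers are also needed, or get_notation_list_async() to receive an
# ApplyResult whose value is obtained with .get().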
def get_notation_list_with_http_info(
self,
ids,
**kwargs
) -> typing.Tuple[InlineResponse20065, int, typing.MutableMapping]:
"""Basic data for a list of notations. # noqa: E501
Basic data for a list of notations. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20065
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['ids'] = \
ids
return self.get_notation_list_endpoint.call_with_http_info(**kwargs)
def get_notation_list_async(
self,
ids,
**kwargs
) -> "ApplyResult[InlineResponse20065]":
"""Basic data for a list of notations. # noqa: E501
Basic data for a list of notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20065]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['ids'] = \
ids
return self.get_notation_list_endpoint.call_with_http_info(**kwargs)
def get_notation_list_with_http_info_async(
self,
ids,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20065, int, typing.MutableMapping]]":
"""Basic data for a list of notations. # noqa: E501
Basic data for a list of notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
ids ([str]): List of notations.
Keyword Args:
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20065, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['ids'] = \
ids
return self.get_notation_list_endpoint.call_with_http_info(**kwargs)
def get_notation_search_basic(
self,
search_value,
**kwargs
) -> InlineResponse20088:
"""Basic search for notations. # noqa: E501
Search for a notation whose ISIN, specified NSINs, name, or symbol match the search value according to a tolerant full-text match algorithm. If more than one notation of an instrument matches, only the notation with the highest monetary trading volume, averaged over one month, is considered. Better matching results appear in the response before less relevant matches. If the parameter popularity is set to true, the popularity of the notation is the primary sort criterion. Popularity is affected mostly by the request frequency of the notation. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
search_value (str): Full-text search string. It may be enclosed in double quotes (\"). No escaping is provided, therefore it is impossible to specify a search string containing double quotes. Relevance of word starts is indicated by a phrase starting with a space character, such as \" daimler\".
Keyword Args:
nsins ([str]): A set of NSIN kinds to consider in the search. If the parameter is absent or the value is empty, all valid NSIN kinds are searched. [optional]
asset_class (str): A parameter to limit the output to a particular asset class. [optional]
only_active (bool): If true, restricts the result to active notations. [optional] if omitted the server will use the default value of True
popularity (bool): If true, the results are sorted by descending popularity. [optional] if omitted the server will use the default value of False
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
pagination_offset (float): Non-negative number of entries to skip, or 0 (default). [optional] if omitted the server will use the default value of 0.0
pagination_limit (float): Non-negative maximum number of entries to return. [optional] if omitted the server will use the default value of 20.0
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20088
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['search_value'] = \
search_value
return self.get_notation_search_basic_endpoint.call_with_http_info(**kwargs)
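# Usage sketch (search value is illustrative; assumes a configured client
# wrapped by this API class as `api`):
#
#     results = api.get_notation_search_basic("daimler",
#                                             only_active=True,
#                                             pagination_limit=10.0)
#
# Per the docstring above, starting the search value with a space character
# (e.g. " daimler") biases the full-text matcher toward word starts, and
# setting popularity=True makes popularity the primary sort criterion.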
def get_notation_search_basic_with_http_info(
self,
search_value,
**kwargs
) -> typing.Tuple[InlineResponse20088, int, typing.MutableMapping]:
"""Basic search for notations. # noqa: E501
Search for a notation whose ISIN, specified NSINs, name, or symbol match the search value according to a tolerant full-text match algorithm. If more than one notation of an instrument matches, only the notation with the highest monetary trading volume, averaged over one month, is considered. Better matching results appear in the response before less relevant matches. If the parameter popularity is set to true, the popularity of the notation is the primary sort criterion. Popularity is affected mostly by the request frequency of the notation. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
search_value (str): Full-text search string. It may be enclosed in double quotes (\"). No escaping is provided, therefore it is impossible to specify a search string containing double quotes. Relevance of word starts is indicated by a phrase starting with a space character, such as \" daimler\".
Keyword Args:
nsins ([str]): A set of NSIN kinds to consider in the search. If the parameter is absent or the value is empty, all valid NSIN kinds are searched. [optional]
asset_class (str): A parameter to limit the output to a particular asset class. [optional]
only_active (bool): If true, restricts the result to active notations. [optional] if omitted the server will use the default value of True
popularity (bool): If true, the results are sorted by descending popularity. [optional] if omitted the server will use the default value of False
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
pagination_offset (float): Non-negative number of entries to skip, or 0 (default). [optional] if omitted the server will use the default value of 0.0
pagination_limit (float): Non-negative maximum number of entries to return. [optional] if omitted the server will use the default value of 20.0
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20088
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['search_value'] = \
search_value
return self.get_notation_search_basic_endpoint.call_with_http_info(**kwargs)
def get_notation_search_basic_async(
self,
search_value,
**kwargs
) -> "ApplyResult[InlineResponse20088]":
"""Basic search for notations. # noqa: E501
Search for a notation whose ISIN, specified NSINs, name, or symbol match the search value according to a tolerant full-text match algorithm. If more than one notation of an instrument matches, only the notation with the highest monetary trading volume, averaged over one month, is considered. Better matching results appear in the response before less relevant matches. If the parameter popularity is set to true, the popularity of the notation is the primary sort criterion. Popularity is affected mostly by the request frequency of the notation. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
search_value (str): Full-text search string. It may be enclosed in double quotes (\"). No escaping is provided, therefore it is impossible to specify a search string containing double quotes. Relevance of word starts is indicated by a phrase starting with a space character, such as \" daimler\".
Keyword Args:
nsins ([str]): A set of NSIN kinds to consider in the search. If the parameter is absent or the value is empty, all valid NSIN kinds are searched. [optional]
asset_class (str): A parameter to limit the output to a particular asset class. [optional]
only_active (bool): If true, restricts the result to active notations. [optional] if omitted the server will use the default value of True
popularity (bool): If true, the results are sorted by descending popularity. [optional] if omitted the server will use the default value of False
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
pagination_offset (float): Non-negative number of entries to skip, or 0 (default). [optional] if omitted the server will use the default value of 0.0
pagination_limit (float): Non-negative maximum number of entries to return. [optional] if omitted the server will use the default value of 20.0
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20088]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['search_value'] = \
search_value
return self.get_notation_search_basic_endpoint.call_with_http_info(**kwargs)
def get_notation_search_basic_with_http_info_async(
self,
search_value,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20088, int, typing.MutableMapping]]":
"""Basic search for notations. # noqa: E501
Search for a notation whose ISIN, specified NSINs, name, or symbol match the search value according to a tolerant full-text match algorithm. If more than one notation of an instrument matches, only the notation with the highest monetary trading volume, averaged over one month, is considered. Better matching results appear in the response before less relevant matches. If the parameter popularity is set to true, the popularity of the notation is the primary sort criterion. Popularity is affected mostly by the request frequency of the notation. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
search_value (str): Full-text search string. It may be enclosed in double quotes (\"). No escaping is provided, therefore it is impossible to specify a search string containing double quotes. Relevance of word starts is indicated by a phrase starting with a space character, such as \" daimler\".
Keyword Args:
nsins ([str]): A set of NSIN kinds to consider in the search. If the parameter is absent or the value is empty, all valid NSIN kinds are searched. [optional]
asset_class (str): A parameter to limit the output to a particular asset class. [optional]
only_active (bool): If true, restricts the result to active notations. [optional] if omitted the server will use the default value of True
popularity (bool): If true, the results are sorted by descending popularity. [optional] if omitted the server will use the default value of False
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
pagination_offset (float): Non-negative number of entries to skip, or 0 (default). [optional] if omitted the server will use the default value of 0.0
pagination_limit (float): Non-negative maximum number of entries to return. [optional] if omitted the server will use the default value of 20.0
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20088, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['search_value'] = \
search_value
return self.get_notation_search_basic_endpoint.call_with_http_info(**kwargs)
def get_notation_search_by_text_ranked_by_volume(
self,
search_value,
**kwargs
) -> InlineResponse20090:
"""Basic search for notations. # noqa: E501
Search for notations whose ISIN, specified NSINs, name, or symbol match the search value according to a tolerant full-text match algorithm. If more than one notation of an instrument matches, only the notation with the highest monetary trading volume, averaged over one month, is considered. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
search_value (str): Full-text search string. It may be enclosed in double quotes (\"). No escaping is provided, therefore it is impossible to specify a search string containing double quotes. Relevance of word starts is indicated by a phrase starting with a space character, such as \" daimler\".
Keyword Args:
id_markets ([float]): List of market identifiers. Limits the results to the given markets. For possible values, see endpoint `/basic/market/list`. [optional]
nsins ([str]): A set of NSIN kinds to consider in the search. If the parameter is absent or the value is empty, all valid NSIN kinds are searched. [optional]
asset_class ([str]): Limits the results to a particular asset class. [optional]
only_active (bool): If true, restricts the result to active notations. [optional] if omitted the server will use the default value of True
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
pagination_offset (float): Non-negative number of entries to skip, or 0 (default). [optional] if omitted the server will use the default value of 0.0
pagination_limit (float): Non-negative maximum number of entries to return. [optional] if omitted the server will use the default value of 20.0
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20090
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['search_value'] = \
search_value
return self.get_notation_search_by_text_ranked_by_volume_endpoint.call_with_http_info(**kwargs)
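# Usage sketch (market identifier is illustrative; valid values come from
# the `/basic/market/list` endpoint, and `api` stands for a configured
# instance of this API class):
#
#     results = api.get_notation_search_by_text_ranked_by_volume(
#         "siemens", id_markets=[1.0], pagination_limit=5.0)
#
# Compared with get_notation_search_basic(), this method additionally
# accepts an id_markets filter and has no popularity parameter.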
def get_notation_search_by_text_ranked_by_volume_with_http_info(
self,
search_value,
**kwargs
) -> typing.Tuple[InlineResponse20090, int, typing.MutableMapping]:
"""Basic search for notations. # noqa: E501
Search for notations whose ISIN, specified NSINs, name, or symbol match the search value according to a tolerant full-text match algorithm. If more than one notation of an instrument matches, only the notation with the highest monetary trading volume, averaged over one month, is considered. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
search_value (str): Full-text search string. It may be enclosed in double quotes (\"). No escaping is provided, therefore it is impossible to specify a search string containing double quotes. Relevance of word starts is indicated by a phrase starting with a space character, such as \" daimler\".
Keyword Args:
id_markets ([float]): List of market identifiers. Limits the results to the given markets. For possible values, see endpoint `/basic/market/list`. [optional]
nsins ([str]): A set of NSIN kinds to consider in the search. If the parameter is absent or the value is empty, all valid NSIN kinds are searched. [optional]
asset_class ([str]): Limits the results to a particular asset class. [optional]
only_active (bool): If true, restricts the result to active notations. [optional] if omitted the server will use the default value of True
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
pagination_offset (float): Non-negative number of entries to skip, or 0 (default). [optional] if omitted the server will use the default value of 0.0
pagination_limit (float): Non-negative maximum number of entries to return. [optional] if omitted the server will use the default value of 20.0
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20090
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['search_value'] = \
search_value
return self.get_notation_search_by_text_ranked_by_volume_endpoint.call_with_http_info(**kwargs)
def get_notation_search_by_text_ranked_by_volume_async(
self,
search_value,
**kwargs
) -> "ApplyResult[InlineResponse20090]":
"""Basic search for notations. # noqa: E501
Search for notations whose ISIN, specified NSINs, name, or symbol match the search value according to a tolerant full-text match algorithm. If more than one notation of an instrument matches, only the notation with the highest monetary trading volume, averaged over one month, is considered. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
search_value (str): Full-text search string. It may be enclosed in double quotes (\"). No escaping is provided, therefore it is impossible to specify a search string containing double quotes. Relevance of word starts is indicated by a phrase starting with a space character, such as \" daimler\".
Keyword Args:
id_markets ([float]): List of market identifiers. Limits the results to the given markets. For possible values, see endpoint `/basic/market/list`. [optional]
nsins ([str]): A set of NSIN kinds to consider in the search. If the parameter is absent or the value is empty, all valid NSIN kinds are searched. [optional]
asset_class ([str]): Limits the results to a particular asset class. [optional]
only_active (bool): If true, restricts the result to active notations. [optional] if omitted the server will use the default value of True
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
pagination_offset (float): Non-negative number of entries to skip, or 0 (default). [optional] if omitted the server will use the default value of 0.0
pagination_limit (float): Non-negative maximum number of entries to return. [optional] if omitted the server will use the default value of 20.0
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20090]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['search_value'] = \
search_value
return self.get_notation_search_by_text_ranked_by_volume_endpoint.call_with_http_info(**kwargs)
def get_notation_search_by_text_ranked_by_volume_with_http_info_async(
self,
search_value,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20090, int, typing.MutableMapping]]":
"""Basic search for notations. # noqa: E501
Search for notations whose ISIN, specified NSINs, name, or symbol match the search value according to a tolerant full-text match algorithm. If more than one notation of an instrument matches, only the notation with the highest monetary trading volume, averaged over one month, is considered. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
search_value (str): Full-text search string. It may be enclosed in double quotes (\"). No escaping is provided, therefore it is impossible to specify a search string containing double quotes. Relevance of word starts is indicated by a phrase starting with a space character, such as \" daimler\".
Keyword Args:
id_markets ([float]): List of market identifiers. Limits the results to the given markets. For possible values, see endpoint `/basic/market/list`. [optional]
nsins ([str]): A set of NSIN kinds to consider in the search. If the parameter is absent or the value is empty, all valid NSIN kinds are searched. [optional]
asset_class ([str]): Limits the results to a particular asset class. [optional]
only_active (bool): If true, restricts the result to active notations. [optional] if omitted the server will use the default value of True
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
language (str): ISO 639-1 code of the language. [optional]
pagination_offset (float): Non-negative number of entries to skip, or 0 (default). [optional] if omitted the server will use the default value of 0.0
pagination_limit (float): Non-negative maximum number of entries to return. [optional] if omitted the server will use the default value of 20.0
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20090, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['search_value'] = \
search_value
return self.get_notation_search_by_text_ranked_by_volume_endpoint.call_with_http_info(**kwargs)
def get_notation_status_get(
self,
id,
**kwargs
) -> InlineResponse20078:
"""Intraday trading status of a notation. # noqa: E501
Intraday trading status of a notation. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
id (str): Identifier of a notation.
Keyword Args:
quality (str): Quality of the trading status. The trading status and related data for a notation cannot be retrieved in end-of-day quality (EOD). [optional] if omitted the server will use the default value of "DLY"
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20078
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['id'] = \
id
return self.get_notation_status_get_endpoint.call_with_http_info(**kwargs)
def get_notation_status_get_with_http_info(
self,
id,
**kwargs
) -> typing.Tuple[InlineResponse20078, int, typing.MutableMapping]:
"""Intraday trading status of a notation. # noqa: E501
Intraday trading status of a notation. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
id (str): Identifier of a notation.
Keyword Args:
quality (str): Quality of the trading status. The trading status and related data for a notation cannot be retrieved in end-of-day quality (EOD). [optional] if omitted the server will use the default value of "DLY"
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20078
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['id'] = \
id
return self.get_notation_status_get_endpoint.call_with_http_info(**kwargs)
def get_notation_status_get_async(
self,
id,
**kwargs
) -> "ApplyResult[InlineResponse20078]":
"""Intraday trading status of a notation. # noqa: E501
Intraday trading status of a notation. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
id (str): Identifier of a notation.
Keyword Args:
quality (str): Quality of the trading status. The trading status and related data for a notation cannot be retrieved in end-of-day quality (EOD). [optional] if omitted the server will use the default value of "DLY"
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20078]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['id'] = \
id
return self.get_notation_status_get_endpoint.call_with_http_info(**kwargs)
def get_notation_status_get_with_http_info_async(
self,
id,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20078, int, typing.MutableMapping]]":
"""Intraday trading status of a notation. # noqa: E501
Intraday trading status of a notation. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
id (str): Identifier of a notation.
Keyword Args:
quality (str): Quality of the trading status. The trading status and related data for a notation cannot be retrieved in end-of-day quality (EOD). [optional] if omitted the server will use the default value of "DLY"
attributes ([str]): Limit the attributes returned in the response to the specified set. [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20078, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['id'] = \
id
return self.get_notation_status_get_endpoint.call_with_http_info(**kwargs)
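# A minimal usage sketch (hypothetical: assumes `api` is an initialized instance of
# this generated API class and that "<notation-id>" stands in for a valid notation
# identifier; neither is defined in this file):
#
#     status = api.get_notation_status_get("<notation-id>", quality="DLY")  # blocking call
#     pending = api.get_notation_status_get_async("<notation-id>")          # returns an ApplyResult
#     status = pending.get()  # ApplyResult.get() blocks until the result is ready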
def post_notation_cross_reference_fact_set_identifier_list_by_fact_set_identifier(
self,
body,
**kwargs
) -> InlineResponse20070:
"""Retrieve a list of notations for a given FactSet identifier. # noqa: E501
<p>Retrieve a list of notations for a given FactSet identifier, grouped by regional identifiers, if available. Listings without a regional identifier are grouped at the end of the response.</p><p>The notation corresponding to the security's primary listing has the attributes <big><tt>regional.isPrimary</tt></big> and <big><tt>regional.listing.isPrimary</tt></big> both set to true. The security's primary listing might not be among the results depending on the entitlement.</p><p>See the group description for more information about the security's primary listing.</p> # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
body (InlineObject19):
Keyword Args:
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20070
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['body'] = \
body
return self.post_notation_cross_reference_fact_set_identifier_list_by_fact_set_identifier_endpoint.call_with_http_info(**kwargs)
def post_notation_cross_reference_fact_set_identifier_list_by_fact_set_identifier_with_http_info(
self,
body,
**kwargs
) -> typing.Tuple[InlineResponse20070, int, typing.MutableMapping]:
"""Retrieve a list of notations for a given FactSet identifier. # noqa: E501
<p>Retrieve a list of notations for a given FactSet identifier, grouped by regional identifiers, if available. Listings without a regional identifier are grouped at the end of the response.</p><p>The notation corresponding to the security's primary listing has the attributes <big><tt>regional.isPrimary</tt></big> and <big><tt>regional.listing.isPrimary</tt></big> both set to true. The security's primary listing might not be among the results depending on the entitlement.</p><p>See the group description for more information about the security's primary listing.</p> # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
body (InlineObject19):
Keyword Args:
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20070
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['body'] = \
body
return self.post_notation_cross_reference_fact_set_identifier_list_by_fact_set_identifier_endpoint.call_with_http_info(**kwargs)
def post_notation_cross_reference_fact_set_identifier_list_by_fact_set_identifier_async(
self,
body,
**kwargs
) -> "ApplyResult[InlineResponse20070]":
"""Retrieve a list of notations for a given FactSet identifier. # noqa: E501
<p>Retrieve a list of notations for a given FactSet identifier, grouped by regional identifiers, if available. Listings without a regional identifier are grouped at the end of the response.</p><p>The notation corresponding to the security's primary listing has the attributes <big><tt>regional.isPrimary</tt></big> and <big><tt>regional.listing.isPrimary</tt></big> both set to true. The security's primary listing might not be among the results depending on the entitlement.</p><p>See the group description for more information about the security's primary listing.</p> # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
body (InlineObject19):
Keyword Args:
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20070]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['body'] = \
body
return self.post_notation_cross_reference_fact_set_identifier_list_by_fact_set_identifier_endpoint.call_with_http_info(**kwargs)
def post_notation_cross_reference_fact_set_identifier_list_by_fact_set_identifier_with_http_info_async(
self,
body,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20070, int, typing.MutableMapping]]":
"""Retrieve a list of notations for a given FactSet identifier. # noqa: E501
<p>Retrieve a list of notations for a given FactSet identifier, grouped by regional identifiers, if available. Listings without a regional identifier are grouped at the end of the response.</p><p>The notation corresponding to the security's primary listing has the attributes <big><tt>regional.isPrimary</tt></big> and <big><tt>regional.listing.isPrimary</tt></big> both set to true. The security's primary listing might not be among the results depending on the entitlement.</p><p>See the group description for more information about the security's primary listing.</p> # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
body (InlineObject19):
Keyword Args:
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20070, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['body'] = \
body
return self.post_notation_cross_reference_fact_set_identifier_list_by_fact_set_identifier_endpoint.call_with_http_info(**kwargs)
def post_notation_cross_reference_fact_set_identifier_list_by_instrument(
self,
body,
**kwargs
) -> InlineResponse20071:
"""Retrieve a list of FactSet identifiers for a given instrument. # noqa: E501
<p>Retrieve a list of FactSet identifiers for a given instrument, grouped by regional identifiers, if available. Listings without a regional identifier are grouped at the end of the response.</p><p>The notation corresponding to the security's primary listing has the attributes <big><tt>regional.isPrimary</tt></big> and <big><tt>regional.listing.isPrimary</tt></big> both set to true. The security's primary listing might not be among the results depending on the entitlement.</p><p>The result contains only notations that have at least one FactSet identifier (see <big><tt>listing.permanentIdentifier</tt></big>, <big><tt>listing.tickerExchange</tt></big>).</p><p>See the group description for more information about the security's primary listing.</p> # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
body (InlineObject20):
Keyword Args:
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20071
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['body'] = \
body
return self.post_notation_cross_reference_fact_set_identifier_list_by_instrument_endpoint.call_with_http_info(**kwargs)
def post_notation_cross_reference_fact_set_identifier_list_by_instrument_with_http_info(
self,
body,
**kwargs
) -> typing.Tuple[InlineResponse20071, int, typing.MutableMapping]:
"""Retrieve a list of FactSet identifiers for a given instrument. # noqa: E501
<p>Retrieve a list of FactSet identifiers for a given instrument, grouped by regional identifiers, if available. Listings without a regional identifier are grouped at the end of the response.</p><p>The notation corresponding to the security's primary listing has the attributes <big><tt>regional.isPrimary</tt></big> and <big><tt>regional.listing.isPrimary</tt></big> both set to true. The security's primary listing might not be among the results depending on the entitlement.</p><p>The result contains only notations that have at least one FactSet identifier (see <big><tt>listing.permanentIdentifier</tt></big>, <big><tt>listing.tickerExchange</tt></big>).</p><p>See the group description for more information about the security's primary listing.</p> # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
body (InlineObject20):
Keyword Args:
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20071
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['body'] = \
body
return self.post_notation_cross_reference_fact_set_identifier_list_by_instrument_endpoint.call_with_http_info(**kwargs)
def post_notation_cross_reference_fact_set_identifier_list_by_instrument_async(
self,
body,
**kwargs
) -> "ApplyResult[InlineResponse20071]":
"""Retrieve a list of FactSet identifiers for a given instrument. # noqa: E501
<p>Retrieve a list of FactSet identifiers for a given instrument, grouped by regional identifiers, if available. Listings without a regional identifier are grouped at the end of the response.</p><p>The notation corresponding to the security's primary listing has the attributes <big><tt>regional.isPrimary</tt></big> and <big><tt>regional.listing.isPrimary</tt></big> both set to true. The security's primary listing might not be among the results depending on the entitlement.</p><p>The result contains only notations that have at least one FactSet identifier (see <big><tt>listing.permanentIdentifier</tt></big>, <big><tt>listing.tickerExchange</tt></big>).</p><p>See the group description for more information about the security's primary listing.</p> # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
body (InlineObject20):
Keyword Args:
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20071]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['body'] = \
body
return self.post_notation_cross_reference_fact_set_identifier_list_by_instrument_endpoint.call_with_http_info(**kwargs)
def post_notation_cross_reference_fact_set_identifier_list_by_instrument_with_http_info_async(
self,
body,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20071, int, typing.MutableMapping]]":
"""Retrieve a list of FactSet identifiers for a given instrument. # noqa: E501
<p>Retrieve a list of FactSet identifiers for a given instrument, grouped by regional identifiers, if available. Listings without a regional identifier are grouped at the end of the response.</p><p>The notation corresponding to the security's primary listing has the attributes <big><tt>regional.isPrimary</tt></big> and <big><tt>regional.listing.isPrimary</tt></big> both set to true. The security's primary listing might not be among the results depending on the entitlement.</p><p>The result contains only notations that have at least one FactSet identifier (see <big><tt>listing.permanentIdentifier</tt></big>, <big><tt>listing.tickerExchange</tt></big>).</p><p>See the group description for more information about the security's primary listing.</p> # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
body (InlineObject20):
Keyword Args:
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20071, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['body'] = \
body
return self.post_notation_cross_reference_fact_set_identifier_list_by_instrument_endpoint.call_with_http_info(**kwargs)
def post_notation_cross_reference_list_by_instrument(
self,
**kwargs
) -> InlineResponse20067:
"""List of entitled notations. # noqa: E501
List of entitled notations. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Keyword Args:
body (InlineObject16): [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20067
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
return self.post_notation_cross_reference_list_by_instrument_endpoint.call_with_http_info(**kwargs)
def post_notation_cross_reference_list_by_instrument_with_http_info(
self,
**kwargs
) -> typing.Tuple[InlineResponse20067, int, typing.MutableMapping]:
"""List of entitled notations. # noqa: E501
List of entitled notations. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Keyword Args:
body (InlineObject16): [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20067
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
return self.post_notation_cross_reference_list_by_instrument_endpoint.call_with_http_info(**kwargs)
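As the Returns section above states, the `_with_http_info` variants yield a `(data, status, headers)` triple rather than the data alone. A sketch of how a caller unpacks that shape (the dispatch function and its payload values here are hypothetical stand-ins):

```python
def call_with_http_info(**kwargs):
    # Stand-in for the endpoint dispatch: returns the (data, status, headers) triple.
    data = {"isin": "US0378331005"}  # illustrative payload, not real API output
    return data, 200, {"Content-Type": "application/json"}

# Unpack all three parts of the response.
data, status, headers = call_with_http_info()
```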
def post_notation_cross_reference_list_by_instrument_async(
self,
**kwargs
) -> "ApplyResult[InlineResponse20067]":
"""List of entitled notations. # noqa: E501
List of entitled notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Keyword Args:
body (InlineObject16): [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20067]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
return self.post_notation_cross_reference_list_by_instrument_endpoint.call_with_http_info(**kwargs)
def post_notation_cross_reference_list_by_instrument_with_http_info_async(
self,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20067, int, typing.MutableMapping]]":
"""List of entitled notations. # noqa: E501
List of entitled notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Keyword Args:
body (InlineObject16): [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20067, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
return self.post_notation_cross_reference_list_by_instrument_endpoint.call_with_http_info(**kwargs)
def post_notation_cross_reference_list_by_isin(
self,
**kwargs
) -> InlineResponse20067:
"""List of entitled notations. # noqa: E501
List of entitled notations. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Keyword Args:
body (InlineObject17): [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20067
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
return self.post_notation_cross_reference_list_by_isin_endpoint.call_with_http_info(**kwargs)
def post_notation_cross_reference_list_by_isin_with_http_info(
self,
**kwargs
) -> typing.Tuple[InlineResponse20067, int, typing.MutableMapping]:
"""List of entitled notations. # noqa: E501
List of entitled notations. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Keyword Args:
body (InlineObject17): [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20067
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
return self.post_notation_cross_reference_list_by_isin_endpoint.call_with_http_info(**kwargs)
def post_notation_cross_reference_list_by_isin_async(
self,
**kwargs
) -> "ApplyResult[InlineResponse20067]":
"""List of entitled notations. # noqa: E501
List of entitled notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Keyword Args:
body (InlineObject17): [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20067]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
return self.post_notation_cross_reference_list_by_isin_endpoint.call_with_http_info(**kwargs)
def post_notation_cross_reference_list_by_isin_with_http_info_async(
self,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20067, int, typing.MutableMapping]]":
"""List of entitled notations. # noqa: E501
List of entitled notations. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Keyword Args:
body (InlineObject17): [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20067, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
return self.post_notation_cross_reference_list_by_isin_endpoint.call_with_http_info(**kwargs)
def post_notation_cross_reference_list_by_symbol(
self,
**kwargs
) -> InlineResponse20068:
"""List of entitled notations. # noqa: E501
List of entitled notations. Symbols are not globally unique; therefore, a given symbol interpreted in different markets might refer to different instruments. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Keyword Args:
body (InlineObject18): [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20068
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
return self.post_notation_cross_reference_list_by_symbol_endpoint.call_with_http_info(**kwargs)
def post_notation_cross_reference_list_by_symbol_with_http_info(
self,
**kwargs
) -> typing.Tuple[InlineResponse20068, int, typing.MutableMapping]:
"""List of entitled notations. # noqa: E501
List of entitled notations. Symbols are not globally unique; therefore, a given symbol interpreted in different markets might refer to different instruments. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Keyword Args:
body (InlineObject18): [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20068
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
return self.post_notation_cross_reference_list_by_symbol_endpoint.call_with_http_info(**kwargs)
def post_notation_cross_reference_list_by_symbol_async(
self,
**kwargs
) -> "ApplyResult[InlineResponse20068]":
"""List of entitled notations. # noqa: E501
List of entitled notations. Symbols are not globally unique; therefore, a given symbol interpreted in different markets might refer to different instruments. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Keyword Args:
body (InlineObject18): [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20068]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
return self.post_notation_cross_reference_list_by_symbol_endpoint.call_with_http_info(**kwargs)
def post_notation_cross_reference_list_by_symbol_with_http_info_async(
self,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20068, int, typing.MutableMapping]]":
"""List of entitled notations. # noqa: E501
List of entitled notations. Symbols are not globally unique; therefore, a given symbol interpreted in different markets might refer to different instruments. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Keyword Args:
body (InlineObject18): [optional]
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20068, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
return self.post_notation_cross_reference_list_by_symbol_endpoint.call_with_http_info(**kwargs)
def post_notation_search_by_text(
self,
body,
**kwargs
) -> InlineResponse20089:
"""Text-based search for notations. # noqa: E501
Text-based search for notations in selected identifier and name attributes according to a tolerant full-text match algorithm. The results satisfy all selected filters; sorting by various attributes is possible. If more than one notation of an instrument matches the parameters, and no market priority has been specified, only the notation with the highest trading volume, averaged over one month, is considered. The result is limited to 10000 notations. All identifiers used as parameters must be valid and entitled. # noqa: E501
This method makes a synchronous HTTP request. Returns the http data only
Args:
body (InlineObject22):
Keyword Args:
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20089
Response Object
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=False)
kwargs['body'] = \
body
return self.post_notation_search_by_text_endpoint.call_with_http_info(**kwargs)
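Unlike the notation cross-reference methods, `post_notation_search_by_text` takes `body` as a required positional argument and folds it into `kwargs` before dispatch. A minimal sketch of that forwarding pattern (the dispatch function and search payload are hypothetical stand-ins):

```python
def call_with_http_info(**kwargs):
    # Stand-in for the endpoint dispatch: echo back what it received.
    return kwargs

def post_notation_search_by_text(body, **kwargs):
    # The required positional body is folded into kwargs before dispatch,
    # mirroring the generated wrapper above.
    kwargs["body"] = body
    return call_with_http_info(**kwargs)

result = post_notation_search_by_text({"searchValue": "Apple"}, _request_timeout=10)
```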
def post_notation_search_by_text_with_http_info(
self,
body,
**kwargs
) -> typing.Tuple[InlineResponse20089, int, typing.MutableMapping]:
"""Text-based search for notations. # noqa: E501
Text-based search for notations in selected identifier and name attributes according to a tolerant full-text match algorithm. The results satisfy all selected filters; sorting by various attributes is possible. If more than one notation of an instrument matches the parameters, and no market priority has been specified, only the notation with the highest trading volume, averaged over one month, is considered. The result is limited to 10000 notations. All identifiers used as parameters must be valid and entitled. # noqa: E501
This method makes a synchronous HTTP request. Returns http data, http status and headers
Args:
body (InlineObject22):
Keyword Args:
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
InlineResponse20089
Response Object
int
Http Status Code
dict
Dictionary of the response headers
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=False)
kwargs['body'] = \
body
return self.post_notation_search_by_text_endpoint.call_with_http_info(**kwargs)
def post_notation_search_by_text_async(
self,
body,
**kwargs
) -> "ApplyResult[InlineResponse20089]":
"""Text-based search for notations. # noqa: E501
Text-based search for notations in selected identifier and name attributes according to a tolerant full-text match algorithm. The results satisfy all selected filters; sorting by various attributes is possible. If more than one notation of an instrument matches the parameters, and no market priority has been specified, only the notation with the highest trading volume, averaged over one month, is considered. The result is limited to 10000 notations. All identifiers used as parameters must be valid and entitled. # noqa: E501
This method makes an asynchronous HTTP request. Returns the http data, wrapped in ApplyResult
Args:
body (InlineObject22):
Keyword Args:
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[InlineResponse20089]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=True, async_req=True)
kwargs['body'] = \
body
return self.post_notation_search_by_text_endpoint.call_with_http_info(**kwargs)
def post_notation_search_by_text_with_http_info_async(
self,
body,
**kwargs
) -> "ApplyResult[typing.Tuple[InlineResponse20089, int, typing.MutableMapping]]":
"""Text-based search for notations. # noqa: E501
Text-based search for notations in selected identifier and name attributes according to a tolerant full-text match algorithm. The results satisfy all selected filters; sorting by various attributes is possible. If more than one notation of an instrument matches the parameters, and no market priority has been specified, only the notation with the highest trading volume, averaged over one month, is considered. The result is limited to 10000 notations. All identifiers used as parameters must be valid and entitled. # noqa: E501
This method makes an asynchronous HTTP request. Returns http data, http status and headers, wrapped in ApplyResult
Args:
body (InlineObject22):
Keyword Args:
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
Returns:
ApplyResult[(InlineResponse20089, int, typing.Dict)]
"""
self.apply_kwargs_defaults(kwargs=kwargs, return_http_data_only=False, async_req=True)
kwargs['body'] = \
body
return self.post_notation_search_by_text_endpoint.call_with_http_info(**kwargs)
# This code is part of the Biopython distribution and governed by its
# license. Please see the LICENSE file that should have been included
# as part of this package.
"""Tests for SearchIO HmmerIO hmmer3-domtab parsers."""
import os
import unittest
from Bio import BiopythonExperimentalWarning
import warnings
with warnings.catch_warnings():
warnings.simplefilter('ignore', BiopythonExperimentalWarning)
from Bio.SearchIO import parse
# test case files are in the Hmmer directory
TEST_DIR = 'Hmmer'
def get_file(filename):
"""Returns the path of a test file."""
return os.path.join(TEST_DIR, filename)
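`SearchIO.parse` returns a lazy iterator of query results, and the test cases below consume it in two ways: materialising everything with `list()` or stepping through with `next()`. A minimal stdlib-only sketch of both patterns, using a hypothetical generator in place of the real parser:

```python
def parse(handle, fmt):
    # Stand-in generator for Bio.SearchIO.parse: yields query results lazily.
    yield "qresult_a"
    yield "qresult_b"

# Pattern 1: materialise every result at once (as in test_domtab_31b1_hmmscan_001).
all_results = list(parse(None, "hmmscan3-domtab"))

# Pattern 2: pull results one at a time (as in test_domtab_30_hmmscan_001).
stream = parse(None, "hmmscan3-domtab")
first = next(stream)
```

The second pattern keeps only one query result in memory at a time, which matters for large tabular output files.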
class HmmscanCases(unittest.TestCase):
fmt = 'hmmscan3-domtab'
def test_domtab_31b1_hmmscan_001(self):
"Test parsing hmmscan-domtab, hmmscan 3.1b1, multiple queries (domtab_31b1_hmmscan_001)"
tab_file = get_file('domtab_31b1_hmmscan_001.out')
qresults = list(parse(tab_file, self.fmt))
self.assertEqual(4, len(qresults))
# first qresult, first hit, first hsp
qresult = qresults[0]
self.assertEqual(1, len(qresult))
self.assertEqual('gi|4885477|ref|NP_005359.1|', qresult.id)
self.assertEqual('-', qresult.accession)
self.assertEqual(154, qresult.seq_len)
hit = qresult[0]
self.assertEqual(1, len(hit))
self.assertEqual('Globin', hit.id)
self.assertEqual('gi|4885477|ref|NP_005359.1|', hit.query_id)
self.assertEqual('PF00042.17', hit.accession)
self.assertEqual(110, hit.seq_len)
self.assertEqual(1e-22, hit.evalue)
self.assertEqual(80.5, hit.bitscore)
self.assertEqual(0.3, hit.bias)
self.assertEqual('Globin', hit.description)
hsp = hit.hsps[0]
self.assertEqual('Globin', hsp.hit_id)
self.assertEqual('gi|4885477|ref|NP_005359.1|', hsp.query_id)
self.assertEqual(1, hsp.domain_index)
self.assertEqual(1.1e-26, hsp.evalue_cond)
self.assertEqual(1.6e-22, hsp.evalue)
self.assertEqual(79.8, hsp.bitscore)
self.assertEqual(0.3, hsp.bias)
self.assertEqual(0, hsp.hit_start)
self.assertEqual(109, hsp.hit_end)
self.assertEqual(6, hsp.query_start)
self.assertEqual(112, hsp.query_end)
self.assertEqual(6, hsp.env_start)
self.assertEqual(113, hsp.env_end)
self.assertEqual(0.97, hsp.acc_avg)
# last qresult, last hit, last hsp
qresult = qresults[-1]
self.assertEqual(5, len(qresult))
self.assertEqual('gi|125490392|ref|NP_038661.2|', qresult.id)
self.assertEqual('-', qresult.accession)
self.assertEqual(352, qresult.seq_len)
hit = qresult[-1]
self.assertEqual(1, len(hit))
self.assertEqual('DUF521', hit.id)
self.assertEqual('gi|125490392|ref|NP_038661.2|', hit.query_id)
self.assertEqual('PF04412.8', hit.accession)
self.assertEqual(400, hit.seq_len)
self.assertEqual(0.15, hit.evalue)
self.assertEqual(10.5, hit.bitscore)
self.assertEqual(0.1, hit.bias)
self.assertEqual('Protein of unknown function (DUF521)', hit.description)
hsp = hit.hsps[0]
self.assertEqual('DUF521', hsp.hit_id)
self.assertEqual('gi|125490392|ref|NP_038661.2|', hsp.query_id)
self.assertEqual(1, hsp.domain_index)
self.assertEqual(9.4e-05, hsp.evalue_cond)
self.assertEqual(0.28, hsp.evalue)
self.assertEqual(9.6, hsp.bitscore)
self.assertEqual(0.1, hsp.bias)
self.assertEqual(272, hsp.hit_start)
self.assertEqual(334, hsp.hit_end)
self.assertEqual(220, hsp.query_start)
self.assertEqual(280, hsp.query_end)
self.assertEqual(196, hsp.env_start)
self.assertEqual(294, hsp.env_end)
self.assertEqual(0.77, hsp.acc_avg)
def test_domtab_30_hmmscan_001(self):
"Test parsing hmmscan-domtab, hmmscan 3.0, multiple queries (domtab_30_hmmscan_001)"
tab_file = get_file('domtab_30_hmmscan_001.out')
qresults = parse(tab_file, self.fmt)
counter = 0
# first qresult
qresult = next(qresults)
counter += 1
self.assertEqual(1, len(qresult))
self.assertEqual('gi|4885477|ref|NP_005359.1|', qresult.id)
self.assertEqual('-', qresult.accession)
self.assertEqual(154, qresult.seq_len)
hit = qresult[0]
self.assertEqual(1, len(hit))
self.assertEqual('Globin', hit.id)
self.assertEqual('gi|4885477|ref|NP_005359.1|', hit.query_id)
self.assertEqual('PF00042.17', hit.accession)
self.assertEqual(108, hit.seq_len)
self.assertEqual(6e-21, hit.evalue)
self.assertEqual(74.6, hit.bitscore)
self.assertEqual(0.3, hit.bias)
self.assertEqual('Globin', hit.description)
hsp = hit.hsps[0]
self.assertEqual('Globin', hsp.hit_id)
self.assertEqual('gi|4885477|ref|NP_005359.1|', hsp.query_id)
self.assertEqual(1, hsp.domain_index)
self.assertEqual(6.7e-25, hsp.evalue_cond)
self.assertEqual(9.2e-21, hsp.evalue)
self.assertEqual(74.0, hsp.bitscore)
self.assertEqual(0.2, hsp.bias)
self.assertEqual(0, hsp.hit_start)
self.assertEqual(107, hsp.hit_end)
self.assertEqual(6, hsp.query_start)
self.assertEqual(112, hsp.query_end)
self.assertEqual(6, hsp.env_start)
self.assertEqual(113, hsp.env_end)
self.assertEqual(0.97, hsp.acc_avg)
# second qresult
qresult = next(qresults)
counter += 1
self.assertEqual(2, len(qresult))
self.assertEqual('gi|126362951:116-221', qresult.id)
self.assertEqual('-', qresult.accession)
self.assertEqual(106, qresult.seq_len)
hit = qresult[0]
self.assertEqual(1, len(hit))
self.assertEqual('Ig_3', hit.id)
self.assertEqual('gi|126362951:116-221', hit.query_id)
self.assertEqual('PF13927.1', hit.accession)
self.assertEqual(75, hit.seq_len)
self.assertEqual(1.4e-09, hit.evalue)
self.assertEqual(38.2, hit.bitscore)
self.assertEqual(0.4, hit.bias)
self.assertEqual('Immunoglobulin domain', hit.description)
hsp = hit.hsps[0]
self.assertEqual('Ig_3', hsp.hit_id)
self.assertEqual('gi|126362951:116-221', hsp.query_id)
self.assertEqual(1, hsp.domain_index)
self.assertEqual(3e-13, hsp.evalue_cond)
self.assertEqual(2.1e-09, hsp.evalue)
self.assertEqual(37.6, hsp.bitscore)
self.assertEqual(0.3, hsp.bias)
self.assertEqual(0, hsp.hit_start)
self.assertEqual(73, hsp.hit_end)
self.assertEqual(8, hsp.query_start)
self.assertEqual(84, hsp.query_end)
self.assertEqual(8, hsp.env_start)
self.assertEqual(88, hsp.env_end)
self.assertEqual(0.94, hsp.acc_avg)
hit = qresult[1]
self.assertEqual(1, len(hit))
self.assertEqual('Ig_2', hit.id)
self.assertEqual('gi|126362951:116-221', hit.query_id)
self.assertEqual('PF13895.1', hit.accession)
self.assertEqual(80, hit.seq_len)
self.assertEqual(3.5e-05, hit.evalue)
self.assertEqual(23.7, hit.bitscore)
self.assertEqual(0.1, hit.bias)
self.assertEqual('Immunoglobulin domain', hit.description)
hsp = hit.hsps[0]
self.assertEqual('Ig_2', hsp.hit_id)
self.assertEqual('gi|126362951:116-221', hsp.query_id)
self.assertEqual(1, hsp.domain_index)
self.assertEqual(6.2e-09, hsp.evalue_cond)
self.assertEqual(4.3e-05, hsp.evalue)
self.assertEqual(23.4, hsp.bitscore)
self.assertEqual(0.1, hsp.bias)
self.assertEqual(0, hsp.hit_start)
self.assertEqual(80, hsp.hit_end)
self.assertEqual(8, hsp.query_start)
self.assertEqual(104, hsp.query_end)
self.assertEqual(8, hsp.env_start)
self.assertEqual(104, hsp.env_end)
self.assertEqual(0.71, hsp.acc_avg)
# third qresult
qresult = next(qresults)
counter += 1
self.assertEqual(2, len(qresult))
self.assertEqual('gi|22748937|ref|NP_065801.1|', qresult.id)
self.assertEqual('-', qresult.accession)
self.assertEqual(1204, qresult.seq_len)
hit = qresult[0]
self.assertEqual(2, len(hit))
self.assertEqual('Xpo1', hit.id)
self.assertEqual('gi|22748937|ref|NP_065801.1|', hit.query_id)
self.assertEqual('PF08389.7', hit.accession)
self.assertEqual(148, hit.seq_len)
self.assertEqual(7.8e-34, hit.evalue)
self.assertEqual(116.6, hit.bitscore)
self.assertEqual(7.8, hit.bias)
self.assertEqual('Exportin 1-like protein', hit.description)
hsp = hit.hsps[0]
self.assertEqual('Xpo1', hsp.hit_id)
self.assertEqual('gi|22748937|ref|NP_065801.1|', hsp.query_id)
self.assertEqual(1, hsp.domain_index)
self.assertEqual(1.6e-37, hsp.evalue_cond)
self.assertEqual(1.1e-33, hsp.evalue)
self.assertEqual(116.1, hsp.bitscore)
self.assertEqual(3.4, hsp.bias)
self.assertEqual(1, hsp.hit_start)
self.assertEqual(148, hsp.hit_end)
self.assertEqual(109, hsp.query_start)
self.assertEqual(271, hsp.query_end)
self.assertEqual(108, hsp.env_start)
self.assertEqual(271, hsp.env_end)
self.assertEqual(0.98, hsp.acc_avg)
hsp = hit.hsps[1]
self.assertEqual('Xpo1', hsp.hit_id)
self.assertEqual('gi|22748937|ref|NP_065801.1|', hsp.query_id)
self.assertEqual(2, hsp.domain_index)
self.assertEqual(0.35, hsp.evalue_cond)
self.assertEqual(2.4e+03, hsp.evalue)
self.assertEqual(-1.8, hsp.bitscore)
self.assertEqual(0.0, hsp.bias)
self.assertEqual(111, hsp.hit_start)
self.assertEqual(139, hsp.hit_end)
self.assertEqual(498, hsp.query_start)
self.assertEqual(525, hsp.query_end)
self.assertEqual(495, hsp.env_start)
self.assertEqual(529, hsp.env_end)
self.assertEqual(0.86, hsp.acc_avg)
# next hit in the third qresult
hit = qresult[1]
self.assertEqual(2, len(hit))
self.assertEqual('IBN_N', hit.id)
self.assertEqual('gi|22748937|ref|NP_065801.1|', hit.query_id)
self.assertEqual('PF03810.14', hit.accession)
self.assertEqual(77, hit.seq_len)
self.assertEqual(0.0039, hit.evalue)
self.assertEqual(16.9, hit.bitscore)
self.assertEqual(0.0, hit.bias)
self.assertEqual('Importin-beta N-terminal domain', hit.description)
hsp = hit.hsps[0]
self.assertEqual('IBN_N', hsp.hit_id)
self.assertEqual('gi|22748937|ref|NP_065801.1|', hsp.query_id)
self.assertEqual(1, hsp.domain_index)
self.assertEqual(4.8e-06, hsp.evalue_cond)
self.assertEqual(0.033, hsp.evalue)
self.assertEqual(14.0, hsp.bitscore)
self.assertEqual(0.0, hsp.bias)
self.assertEqual(3, hsp.hit_start)
self.assertEqual(75, hsp.hit_end)
self.assertEqual(35, hsp.query_start)
self.assertEqual(98, hsp.query_end)
self.assertEqual(32, hsp.env_start)
self.assertEqual(100, hsp.env_end)
self.assertEqual(0.87, hsp.acc_avg)
hsp = hit.hsps[1]
self.assertEqual('IBN_N', hsp.hit_id)
self.assertEqual('gi|22748937|ref|NP_065801.1|', hsp.query_id)
self.assertEqual(2, hsp.domain_index)
self.assertEqual(1.2, hsp.evalue_cond)
self.assertEqual(8e+03, hsp.evalue)
self.assertEqual(-3.3, hsp.bitscore)
self.assertEqual(0.0, hsp.bias)
self.assertEqual(56, hsp.hit_start)
self.assertEqual(75, hsp.hit_end)
self.assertEqual(167, hsp.query_start)
self.assertEqual(186, hsp.query_end)
self.assertEqual(164, hsp.env_start)
self.assertEqual(187, hsp.env_end)
self.assertEqual(0.85, hsp.acc_avg)
# fourth qresult
qresult = next(qresults)
counter += 1
self.assertEqual(5, len(qresult))
self.assertEqual('gi|125490392|ref|NP_038661.2|', qresult.id)
self.assertEqual('-', qresult.accession)
self.assertEqual(352, qresult.seq_len)
# first hit
hit = qresult[0]
self.assertEqual(1, len(hit))
self.assertEqual('Pou', hit.id)
self.assertEqual('gi|125490392|ref|NP_038661.2|', hit.query_id)
self.assertEqual('PF00157.12', hit.accession)
self.assertEqual(75, hit.seq_len)
self.assertEqual(7e-37, hit.evalue)
self.assertEqual(124.8, hit.bitscore)
self.assertEqual(0.5, hit.bias)
self.assertEqual('Pou domain - N-terminal to homeobox domain', hit.description)
hsp = hit.hsps[0]
self.assertEqual('Pou', hsp.hit_id)
self.assertEqual('gi|125490392|ref|NP_038661.2|', hsp.query_id)
self.assertEqual(1, hsp.domain_index)
self.assertEqual(5e-40, hsp.evalue_cond)
self.assertEqual(1.4e-36, hsp.evalue)
self.assertEqual(123.9, hsp.bitscore)
self.assertEqual(0.3, hsp.bias)
self.assertEqual(2, hsp.hit_start)
self.assertEqual(75, hsp.hit_end)
self.assertEqual(132, hsp.query_start)
self.assertEqual(205, hsp.query_end)
self.assertEqual(130, hsp.env_start)
self.assertEqual(205, hsp.env_end)
self.assertEqual(0.97, hsp.acc_avg)
# second hit
hit = qresult[1]
self.assertEqual(1, len(hit))
self.assertEqual('Homeobox', hit.id)
self.assertEqual('gi|125490392|ref|NP_038661.2|', hit.query_id)
self.assertEqual('PF00046.24', hit.accession)
self.assertEqual(57, hit.seq_len)
self.assertEqual(2.1e-18, hit.evalue)
self.assertEqual(65.5, hit.bitscore)
self.assertEqual(1.1, hit.bias)
self.assertEqual('Homeobox domain', hit.description)
hsp = hit.hsps[0]
self.assertEqual('Homeobox', hsp.hit_id)
self.assertEqual('gi|125490392|ref|NP_038661.2|', hsp.query_id)
self.assertEqual(1, hsp.domain_index)
self.assertEqual(1.5e-21, hsp.evalue_cond)
self.assertEqual(4.1e-18, hsp.evalue)
self.assertEqual(64.6, hsp.bitscore)
self.assertEqual(0.7, hsp.bias)
self.assertEqual(0, hsp.hit_start)
self.assertEqual(57, hsp.hit_end)
self.assertEqual(223, hsp.query_start)
self.assertEqual(280, hsp.query_end)
self.assertEqual(223, hsp.env_start)
self.assertEqual(280, hsp.env_end)
self.assertEqual(0.98, hsp.acc_avg)
# third hit
hit = qresult[2]
self.assertEqual(2, len(hit))
self.assertEqual('HTH_31', hit.id)
self.assertEqual('gi|125490392|ref|NP_038661.2|', hit.query_id)
self.assertEqual('PF13560.1', hit.accession)
self.assertEqual(64, hit.seq_len)
self.assertEqual(0.012, hit.evalue)
self.assertEqual(15.6, hit.bitscore)
self.assertEqual(0.0, hit.bias)
self.assertEqual('Helix-turn-helix domain', hit.description)
hsp = hit.hsps[0]
self.assertEqual('HTH_31', hsp.hit_id)
self.assertEqual('gi|125490392|ref|NP_038661.2|', hsp.query_id)
self.assertEqual(1, hsp.domain_index)
self.assertEqual(5.7e-05, hsp.evalue_cond)
self.assertEqual(0.16, hsp.evalue)
self.assertEqual(12.0, hsp.bitscore)
self.assertEqual(0.0, hsp.bias)
self.assertEqual(0, hsp.hit_start)
self.assertEqual(35, hsp.hit_end)
self.assertEqual(140, hsp.query_start)
self.assertEqual(181, hsp.query_end)
self.assertEqual(140, hsp.env_start)
self.assertEqual(184, hsp.env_end)
self.assertEqual(0.96, hsp.acc_avg)
hsp = hit.hsps[1]
self.assertEqual('HTH_31', hsp.hit_id)
self.assertEqual('gi|125490392|ref|NP_038661.2|', hsp.query_id)
self.assertEqual(2, hsp.domain_index)
self.assertEqual(0.19, hsp.evalue_cond)
self.assertEqual(5.2e+02, hsp.evalue)
self.assertEqual(0.8, hsp.bitscore)
self.assertEqual(0.0, hsp.bias)
self.assertEqual(38, hsp.hit_start)
self.assertEqual(62, hsp.hit_end)
self.assertEqual(244, hsp.query_start)
self.assertEqual(268, hsp.query_end)
self.assertEqual(242, hsp.env_start)
self.assertEqual(270, hsp.env_end)
self.assertEqual(0.86, hsp.acc_avg)
# fourth hit
hit = qresult[3]
self.assertEqual(1, len(hit))
self.assertEqual('Homeobox_KN', hit.id)
self.assertEqual('gi|125490392|ref|NP_038661.2|', hit.query_id)
self.assertEqual('PF05920.6', hit.accession)
self.assertEqual(40, hit.seq_len)
self.assertEqual(0.039, hit.evalue)
self.assertEqual(13.5, hit.bitscore)
self.assertEqual(0.0, hit.bias)
self.assertEqual('Homeobox KN domain', hit.description)
hsp = hit.hsps[0]
self.assertEqual('Homeobox_KN', hsp.hit_id)
self.assertEqual('gi|125490392|ref|NP_038661.2|', hsp.query_id)
self.assertEqual(1, hsp.domain_index)
self.assertEqual(3.5e-05, hsp.evalue_cond)
self.assertEqual(0.095, hsp.evalue)
self.assertEqual(12.3, hsp.bitscore)
self.assertEqual(0.0, hsp.bias)
self.assertEqual(6, hsp.hit_start)
self.assertEqual(39, hsp.hit_end)
self.assertEqual(243, hsp.query_start)
self.assertEqual(276, hsp.query_end)
self.assertEqual(240, hsp.env_start)
self.assertEqual(277, hsp.env_end)
self.assertEqual(0.91, hsp.acc_avg)
# fifth hit
hit = qresult[4]
self.assertEqual(1, len(hit))
self.assertEqual('DUF521', hit.id)
self.assertEqual('gi|125490392|ref|NP_038661.2|', hit.query_id)
self.assertEqual('PF04412.8', hit.accession)
self.assertEqual(400, hit.seq_len)
self.assertEqual(0.14, hit.evalue)
self.assertEqual(10.5, hit.bitscore)
self.assertEqual(0.1, hit.bias)
self.assertEqual('Protein of unknown function (DUF521)', hit.description)
hsp = hit.hsps[0]
self.assertEqual('DUF521', hsp.hit_id)
self.assertEqual('gi|125490392|ref|NP_038661.2|', hsp.query_id)
self.assertEqual(1, hsp.domain_index)
self.assertEqual(9.4e-05, hsp.evalue_cond)
self.assertEqual(0.26, hsp.evalue)
self.assertEqual(9.6, hsp.bitscore)
self.assertEqual(0.1, hsp.bias)
self.assertEqual(272, hsp.hit_start)
self.assertEqual(334, hsp.hit_end)
self.assertEqual(220, hsp.query_start)
self.assertEqual(280, hsp.query_end)
self.assertEqual(196, hsp.env_start)
self.assertEqual(294, hsp.env_end)
self.assertEqual(0.77, hsp.acc_avg)
# test if we've properly finished iteration
self.assertRaises(StopIteration, next, qresults)
self.assertEqual(4, counter)
def test_domtab_30_hmmscan_002(self):
"Test parsing hmmscan-domtab, hmmscan 3.0, single query, no hits (domtab_30_hmmscan_002)"
tab_file = get_file('domtab_30_hmmscan_002.out')
qresults = parse(tab_file, self.fmt)
self.assertRaises(StopIteration, next, qresults)
def test_domtab_30_hmmscan_003(self):
        "Test parsing hmmscan-domtab, hmmscan 3.0, single query (domtab_30_hmmscan_003)"
tab_file = get_file('domtab_30_hmmscan_003.out')
qresults = parse(tab_file, self.fmt)
counter = 0
qresult = next(qresults)
counter += 1
self.assertEqual(1, len(qresult))
self.assertEqual('gi|4885477|ref|NP_005359.1|', qresult.id)
self.assertEqual('-', qresult.accession)
self.assertEqual(154, qresult.seq_len)
hit = qresult[0]
self.assertEqual(1, len(hit))
self.assertEqual('Globin', hit.id)
self.assertEqual('gi|4885477|ref|NP_005359.1|', hit.query_id)
self.assertEqual('PF00042.17', hit.accession)
self.assertEqual(108, hit.seq_len)
self.assertEqual(6e-21, hit.evalue)
self.assertEqual(74.6, hit.bitscore)
self.assertEqual(0.3, hit.bias)
self.assertEqual('Globin', hit.description)
hsp = hit.hsps[0]
self.assertEqual('Globin', hsp.hit_id)
self.assertEqual('gi|4885477|ref|NP_005359.1|', hsp.query_id)
self.assertEqual(1, hsp.domain_index)
self.assertEqual(6.7e-25, hsp.evalue_cond)
self.assertEqual(9.2e-21, hsp.evalue)
self.assertEqual(74.0, hsp.bitscore)
self.assertEqual(0.2, hsp.bias)
self.assertEqual(0, hsp.hit_start)
self.assertEqual(107, hsp.hit_end)
self.assertEqual(6, hsp.query_start)
self.assertEqual(112, hsp.query_end)
self.assertEqual(6, hsp.env_start)
self.assertEqual(113, hsp.env_end)
self.assertEqual(0.97, hsp.acc_avg)
# test if we've properly finished iteration
self.assertRaises(StopIteration, next, qresults)
self.assertEqual(1, counter)
def test_domtab_30_hmmscan_004(self):
        "Test parsing hmmscan-domtab, hmmscan 3.0, single query, multiple hits (domtab_30_hmmscan_004)"
tab_file = get_file('domtab_30_hmmscan_004.out')
qresults = parse(tab_file, self.fmt)
counter = 0
qresult = next(qresults)
counter += 1
self.assertEqual(2, len(qresult))
self.assertEqual('gi|126362951:116-221', qresult.id)
self.assertEqual('-', qresult.accession)
self.assertEqual(106, qresult.seq_len)
hit = qresult[0]
self.assertEqual(1, len(hit))
self.assertEqual('Ig_3', hit.id)
self.assertEqual('gi|126362951:116-221', hit.query_id)
self.assertEqual('PF13927.1', hit.accession)
self.assertEqual(75, hit.seq_len)
self.assertEqual(1.4e-09, hit.evalue)
self.assertEqual(38.2, hit.bitscore)
self.assertEqual(0.4, hit.bias)
self.assertEqual('Immunoglobulin domain', hit.description)
hsp = hit.hsps[0]
self.assertEqual('Ig_3', hsp.hit_id)
self.assertEqual('gi|126362951:116-221', hsp.query_id)
self.assertEqual(1, hsp.domain_index)
self.assertEqual(3e-13, hsp.evalue_cond)
self.assertEqual(2.1e-09, hsp.evalue)
self.assertEqual(37.6, hsp.bitscore)
self.assertEqual(0.3, hsp.bias)
self.assertEqual(0, hsp.hit_start)
self.assertEqual(73, hsp.hit_end)
self.assertEqual(8, hsp.query_start)
self.assertEqual(84, hsp.query_end)
self.assertEqual(8, hsp.env_start)
self.assertEqual(88, hsp.env_end)
self.assertEqual(0.94, hsp.acc_avg)
hit = qresult[1]
self.assertEqual(1, len(hit))
self.assertEqual('Ig_2', hit.id)
self.assertEqual('gi|126362951:116-221', hit.query_id)
self.assertEqual('PF13895.1', hit.accession)
self.assertEqual(80, hit.seq_len)
self.assertEqual(3.5e-05, hit.evalue)
self.assertEqual(23.7, hit.bitscore)
self.assertEqual(0.1, hit.bias)
self.assertEqual('Immunoglobulin domain', hit.description)
hsp = hit.hsps[0]
self.assertEqual('Ig_2', hsp.hit_id)
self.assertEqual('gi|126362951:116-221', hsp.query_id)
self.assertEqual(1, hsp.domain_index)
self.assertEqual(6.2e-09, hsp.evalue_cond)
self.assertEqual(4.3e-05, hsp.evalue)
self.assertEqual(23.4, hsp.bitscore)
self.assertEqual(0.1, hsp.bias)
self.assertEqual(0, hsp.hit_start)
self.assertEqual(80, hsp.hit_end)
self.assertEqual(8, hsp.query_start)
self.assertEqual(104, hsp.query_end)
self.assertEqual(8, hsp.env_start)
self.assertEqual(104, hsp.env_end)
self.assertEqual(0.71, hsp.acc_avg)
# test if we've properly finished iteration
self.assertRaises(StopIteration, next, qresults)
self.assertEqual(1, counter)
class HmmersearchCases(unittest.TestCase):
fmt = 'hmmsearch3-domtab'
def test_domtab_31b1_hmmsearch_001(self):
"Test parsing hmmsearch-domtab, hmmsearch 3.1b1, single query (domtab_31b1_hmmsearch_001)"
tab_file = get_file('domtab_31b1_hmmsearch_001.out')
qresults = list(parse(tab_file, self.fmt))
self.assertEqual(1, len(qresults))
qresult = qresults[0]
self.assertEqual('Pkinase', qresult.id)
self.assertEqual('PF00069.17', qresult.accession)
self.assertEqual(260, qresult.seq_len)
hit = qresult[0]
self.assertEqual(2, len(hit))
self.assertEqual('sp|Q9WUT3|KS6A2_MOUSE', hit.id)
self.assertEqual('Pkinase', hit.query_id)
self.assertEqual('-', hit.accession)
self.assertEqual(733, hit.seq_len)
self.assertEqual(8.5e-147, hit.evalue)
self.assertEqual(492.3, hit.bitscore)
self.assertEqual(0.0, hit.bias)
self.assertEqual('Ribosomal protein S6 kinase alpha-2 OS=Mus musculus GN=Rps6ka2 PE=1 SV=1', hit.description)
hsp = hit.hsps[0]
self.assertEqual('sp|Q9WUT3|KS6A2_MOUSE', hsp.hit_id)
self.assertEqual('Pkinase', hsp.query_id)
self.assertEqual(1, hsp.domain_index)
self.assertEqual(2.6e-75, hsp.evalue_cond)
self.assertEqual(3.6e-70, hsp.evalue)
self.assertEqual(241.2, hsp.bitscore)
self.assertEqual(0.0, hsp.bias)
self.assertEqual(58, hsp.hit_start)
self.assertEqual(318, hsp.hit_end)
self.assertEqual(0, hsp.query_start)
self.assertEqual(260, hsp.query_end)
self.assertEqual(58, hsp.env_start)
self.assertEqual(318, hsp.env_end)
self.assertEqual(0.95, hsp.acc_avg)
def test_domtab_30_hmmsearch_001(self):
"Test parsing hmmsearch-domtab, hmmsearch 3.0, multiple queries (domtab_30_hmmsearch_001)"
tab_file = get_file('domtab_30_hmmsearch_001.out')
qresults = parse(tab_file, self.fmt)
# first qresult
# we only want to check the coordinate switch actually
# so checking the first hsp of the first hit of the qresult is enough
qresult = next(qresults)
self.assertEqual(7, len(qresult))
self.assertEqual('Pkinase', qresult.id)
self.assertEqual('PF00069.17', qresult.accession)
self.assertEqual(260, qresult.seq_len)
hit = qresult[0]
self.assertEqual(2, len(hit))
self.assertEqual('sp|Q9WUT3|KS6A2_MOUSE', hit.id)
self.assertEqual('Pkinase', hit.query_id)
self.assertEqual('-', hit.accession)
self.assertEqual(733, hit.seq_len)
self.assertEqual(8.4e-147, hit.evalue)
self.assertEqual(492.3, hit.bitscore)
self.assertEqual(0.0, hit.bias)
self.assertEqual('Ribosomal protein S6 kinase alpha-2 OS=Mus musculus GN=Rps6ka2 PE=2 SV=1', hit.description)
hsp = hit.hsps[0]
self.assertEqual('sp|Q9WUT3|KS6A2_MOUSE', hsp.hit_id)
self.assertEqual('Pkinase', hsp.query_id)
self.assertEqual(1, hsp.domain_index)
self.assertEqual(4.6e-75, hsp.evalue_cond)
self.assertEqual(3.5e-70, hsp.evalue)
self.assertEqual(241.2, hsp.bitscore)
self.assertEqual(0.0, hsp.bias)
self.assertEqual(58, hsp.hit_start)
self.assertEqual(318, hsp.hit_end)
self.assertEqual(0, hsp.query_start)
self.assertEqual(260, hsp.query_end)
self.assertEqual(58, hsp.env_start)
self.assertEqual(318, hsp.env_end)
self.assertEqual(0.95, hsp.acc_avg)
if __name__ == "__main__":
    runner = unittest.TextTestRunner(verbosity=2)
unittest.main(testRunner=runner)
######################################################################
# File: tests/timing/test_throttling.py
# Repo: ankitdobhal/kopf (MIT license)
######################################################################
import logging
from unittest.mock import call
import pytest
from kopf.reactor.effects import throttled
from kopf.structs.containers import Throttler
@pytest.fixture(autouse=True)
def clock(mocker):
return mocker.patch('time.monotonic', return_value=0)
@pytest.fixture(autouse=True)
def sleep(mocker):
return mocker.patch('kopf.structs.primitives.sleep_or_wait', return_value=None)
async def test_remains_inactive_on_success():
logger = logging.getLogger()
throttler = Throttler()
async with throttled(throttler=throttler, logger=logger, delays=[123]):
pass
assert throttler.source_of_delays is None
assert throttler.last_used_delay is None
@pytest.mark.parametrize('exc_cls, kwargs', [
pytest.param(BaseException, dict(), id='none'),
pytest.param(BaseException, dict(errors=BaseException), id='base'),
pytest.param(Exception, dict(errors=ValueError), id='child'),
pytest.param(RuntimeError, dict(errors=ValueError), id='sibling'),
pytest.param(RuntimeError, dict(errors=(ValueError, TypeError)), id='tuple'),
])
async def test_escalates_unexpected_errors(exc_cls, kwargs):
logger = logging.getLogger()
throttler = Throttler()
with pytest.raises(exc_cls):
async with throttled(throttler=throttler, logger=logger, delays=[123], **kwargs):
raise exc_cls()
@pytest.mark.parametrize('exc_cls, kwargs', [
pytest.param(Exception, dict(), id='none'),
pytest.param(RuntimeError, dict(errors=Exception), id='parent'),
pytest.param(RuntimeError, dict(errors=(RuntimeError, EnvironmentError)), id='tuple'),
])
async def test_activates_on_expected_errors(exc_cls, kwargs):
logger = logging.getLogger()
throttler = Throttler()
async with throttled(throttler=throttler, logger=logger, delays=[123], **kwargs):
raise exc_cls()
assert throttler.source_of_delays is not None
assert throttler.last_used_delay is not None
async def test_sleeps_for_the_first_delay_when_inactive(sleep):
logger = logging.getLogger()
throttler = Throttler()
async with throttled(throttler=throttler, logger=logger, delays=[123, 234]):
raise Exception()
assert throttler.last_used_delay == 123
assert throttler.source_of_delays is not None
assert next(throttler.source_of_delays) == 234
assert throttler.active_until is None # means: no sleep time left
assert sleep.mock_calls == [call(123, wakeup=None)]
async def test_sleeps_for_the_next_delay_when_active(sleep):
logger = logging.getLogger()
throttler = Throttler()
async with throttled(throttler=throttler, logger=logger, delays=[123, 234]):
raise Exception()
sleep.reset_mock()
async with throttled(throttler=throttler, logger=logger, delays=[...]):
raise Exception()
assert throttler.last_used_delay == 234
assert throttler.source_of_delays is not None
assert next(throttler.source_of_delays, 999) == 999
assert throttler.active_until is None # means: no sleep time left
assert sleep.mock_calls == [call(234, wakeup=None)]
async def test_sleeps_for_the_last_known_delay_when_depleted(sleep):
logger = logging.getLogger()
throttler = Throttler()
async with throttled(throttler=throttler, logger=logger, delays=[123, 234]):
raise Exception()
async with throttled(throttler=throttler, logger=logger, delays=[...]):
raise Exception()
sleep.reset_mock()
async with throttled(throttler=throttler, logger=logger, delays=[...]):
raise Exception()
assert throttler.last_used_delay == 234
assert throttler.source_of_delays is not None
assert next(throttler.source_of_delays, 999) == 999
assert throttler.active_until is None # means: no sleep time left
assert sleep.mock_calls == [call(234, wakeup=None)]
async def test_resets_on_success(sleep):
logger = logging.getLogger()
throttler = Throttler()
async with throttled(throttler=throttler, logger=logger, delays=[123]):
raise Exception()
sleep.reset_mock()
async with throttled(throttler=throttler, logger=logger, delays=[...]):
pass
assert throttler.last_used_delay is None
assert throttler.source_of_delays is None
assert throttler.active_until is None
assert sleep.mock_calls == []
async def test_skips_on_no_delays(sleep):
logger = logging.getLogger()
throttler = Throttler()
async with throttled(throttler=throttler, logger=logger, delays=[]):
raise Exception()
assert throttler.last_used_delay is None
assert throttler.source_of_delays is not None
assert next(throttler.source_of_delays, 999) == 999
assert throttler.active_until is None # means: no sleep time left
assert sleep.mock_calls == []
async def test_renews_on_repeated_failure(sleep):
logger = logging.getLogger()
throttler = Throttler()
async with throttled(throttler=throttler, logger=logger, delays=[123]):
raise Exception()
async with throttled(throttler=throttler, logger=logger, delays=[...]):
pass
sleep.reset_mock()
async with throttled(throttler=throttler, logger=logger, delays=[234]):
raise Exception()
    assert throttler.last_used_delay == 234
assert throttler.source_of_delays is not None
assert throttler.active_until is None
assert sleep.mock_calls == [call(234, wakeup=None)]
async def test_interruption(clock, sleep):
wakeup = asyncio.Event()
logger = logging.getLogger()
throttler = Throttler()
clock.return_value = 1000 # simulated "now"
sleep.return_value = 55 # simulated sleep time left
async with throttled(throttler=throttler, logger=logger, delays=[123, 234], wakeup=wakeup):
raise Exception()
assert throttler.last_used_delay == 123
assert throttler.source_of_delays is not None
assert throttler.active_until == 1123 # means: some sleep time is left
assert sleep.mock_calls == [call(123, wakeup=wakeup)]
async def test_continuation_with_success(clock, sleep):
wakeup = asyncio.Event()
logger = logging.getLogger()
throttler = Throttler()
clock.return_value = 1000 # simulated "now"
sleep.return_value = 55 # simulated sleep time left
async with throttled(throttler=throttler, logger=logger, delays=[123, 234], wakeup=wakeup):
raise Exception()
sleep.reset_mock()
clock.return_value = 1077 # simulated "now"
sleep.return_value = None # simulated sleep time left
async with throttled(throttler=throttler, logger=logger, delays=[...], wakeup=wakeup):
pass
assert throttler.last_used_delay is None
assert throttler.source_of_delays is None
assert throttler.active_until is None # means: no sleep time is left
assert sleep.mock_calls == [call(123 - 77, wakeup=wakeup)]
async def test_continuation_with_error(clock, sleep):
wakeup = asyncio.Event()
logger = logging.getLogger()
throttler = Throttler()
clock.return_value = 1000 # simulated "now"
sleep.return_value = 55 # simulated sleep time left
async with throttled(throttler=throttler, logger=logger, delays=[123, 234], wakeup=wakeup):
raise Exception()
sleep.reset_mock()
clock.return_value = 1077 # simulated "now"
sleep.return_value = None # simulated sleep time left
async with throttled(throttler=throttler, logger=logger, delays=[...], wakeup=wakeup):
raise Exception()
assert throttler.last_used_delay == 234
assert throttler.source_of_delays is not None
assert throttler.active_until is None # means: no sleep time is left
assert sleep.mock_calls == [call(123 - 77, wakeup=wakeup), call(234, wakeup=wakeup)]
async def test_continuation_when_overdue(clock, sleep):
wakeup = asyncio.Event()
logger = logging.getLogger()
throttler = Throttler()
clock.return_value = 1000 # simulated "now"
sleep.return_value = 55 # simulated sleep time left
async with throttled(throttler=throttler, logger=logger, delays=[123, 234], wakeup=wakeup):
raise Exception()
sleep.reset_mock()
clock.return_value = 2000 # simulated "now"
sleep.return_value = None # simulated sleep time left
async with throttled(throttler=throttler, logger=logger, delays=[...], wakeup=wakeup):
raise Exception()
assert throttler.last_used_delay == 234
assert throttler.source_of_delays is not None
assert throttler.active_until is None # means: no sleep time is left
assert sleep.mock_calls == [call(123 - 1000, wakeup=wakeup), call(234, wakeup=wakeup)]
async def test_recommends_running_initially():
logger = logging.getLogger()
throttler = Throttler()
async with throttled(throttler=throttler, logger=logger, delays=[123]) as should_run:
remembered_should_run = should_run
assert remembered_should_run is True
async def test_recommends_skipping_immediately_after_interrupted_error(sleep):
logger = logging.getLogger()
throttler = Throttler()
sleep.return_value = 33 # simulated sleep time left
async with throttled(throttler=throttler, logger=logger, delays=[123]):
raise Exception()
sleep.return_value = 33 # simulated sleep time left
async with throttled(throttler=throttler, logger=logger, delays=[...]) as should_run:
remembered_should_run = should_run
assert remembered_should_run is False
async def test_recommends_running_immediately_after_continued(sleep):
logger = logging.getLogger()
throttler = Throttler()
sleep.return_value = 33 # simulated sleep time left
async with throttled(throttler=throttler, logger=logger, delays=[123]):
raise Exception()
sleep.return_value = None # simulated sleep time left
async with throttled(throttler=throttler, logger=logger, delays=[...]) as should_run:
remembered_should_run = should_run
assert remembered_should_run is True
async def test_logging_when_deactivates_immediately(caplog):
caplog.set_level(0)
logger = logging.getLogger()
throttler = Throttler()
async with throttled(throttler=throttler, logger=logger, delays=[123]):
raise Exception()
assert caplog.messages == [
"Throttling for 123 seconds due to an unexpected error:",
"Throttling is over. Switching back to normal operations.",
]
async def test_logging_when_deactivates_on_reentry(sleep, caplog):
caplog.set_level(0)
logger = logging.getLogger()
throttler = Throttler()
sleep.return_value = 55 # simulated sleep time left
async with throttled(throttler=throttler, logger=logger, delays=[123]):
raise Exception()
sleep.return_value = None # simulated sleep time left
async with throttled(throttler=throttler, logger=logger, delays=[...]):
pass
assert caplog.messages == [
"Throttling for 123 seconds due to an unexpected error:",
"Throttling is over. Switching back to normal operations.",
]
######################################################################
# File: tests/test_routes.py
# Repo: 2022-Spring-NYU-DevOps-Shopcarts/shopcarts (Apache-2.0 license)
######################################################################
"""
TestShopcart API Service Test Suite
Test cases can be run with the following:
nosetests -v --with-spec --spec-color
coverage report -m
"""
import os
import logging
from unittest import TestCase
from unittest.mock import MagicMock, patch
from flask import jsonify
from service.utils import status # HTTP Status Codes
from service.models import db
from service.routes import app, init_db
from tests.factories import ItemFactory
DATABASE_URI = os.getenv(
"DATABASE_URI", "postgresql://postgres:postgres@localhost:5432/testdb"
)
BASE_URL = "/shopcarts"
CONTENT_TYPE_JSON = "application/json"
######################################################################
# T E S T C A S E S
######################################################################
class TestYourResourceServer(TestCase):
""" REST API Server Tests """
@classmethod
def setUpClass(cls):
""" This runs once before the entire test suite """
app.config["TESTING"] = True
app.config["DEBUG"] = False
# Set up the test database
app.config["SQLALCHEMY_DATABASE_URI"] = DATABASE_URI
app.logger.setLevel(logging.CRITICAL)
init_db()
@classmethod
def tearDownClass(cls):
""" This runs once after the entire test suite """
db.session.close()
def setUp(self):
""" This runs before each test """
db.drop_all() # clean up the last tests
db.create_all() # create new tables
self.app = app.test_client()
def tearDown(self):
""" This runs after each test """
db.session.remove()
db.drop_all()
def _create_items(self, count):
"""Factory method to create items in shopcart in bulk"""
shopcart = ItemFactory()
items = [shopcart]
items_serialize = [shopcart.serialize()]
        for i in range(count - 1):
            new_item = ItemFactory(user_id=shopcart.user_id, item_id=shopcart.item_id + i + 1)
items.append(new_item)
items_serialize.append(new_item.serialize())
resp = self.app.post(
BASE_URL,
json={"user_id": shopcart.user_id,
"items": items_serialize},
content_type=CONTENT_TYPE_JSON
)
self.assertEqual(
resp.status_code, status.HTTP_201_CREATED, "Could not create test items in shopcart"
)
return items
######################################################################
# P L A C E T E S T C A S E S H E R E
######################################################################
def test_index(self):
""" Test index call """
resp = self.app.get("/")
self.assertEqual(resp.status_code, status.HTTP_200_OK)
def test_method_not_supported(self):
"""Test Method Not Supported"""
resp = self.app.put(BASE_URL, json={}, content_type=CONTENT_TYPE_JSON)
self.assertEqual(resp.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
######################################################################
# TEST LIST SHOPCARTS
######################################################################
def test_get_shopcart_list(self):
        """Get a list of Shopcarts"""
shopcart1 = self._create_items(2)
shopcart2 = self._create_items(3)
resp = self.app.get(BASE_URL)
self.assertEqual(resp.status_code, status.HTTP_200_OK)
data = resp.get_json()
self.assertEqual(len(data), 2)
dict1 = {'user_id': shopcart1[0].user_id}
dict2 = {'user_id': shopcart2[0].user_id}
        self.assertCountEqual(data, [dict1, dict2])
def test_get_shopcart(self):
"""Get a shopcart"""
# get the items of a shopcart
test_shopcart = self._create_items(3)
resp = self.app.get(
"{0}/{1}".format(BASE_URL, test_shopcart[0].user_id), content_type=CONTENT_TYPE_JSON
)
self.assertEqual(resp.status_code, status.HTTP_200_OK)
data = resp.get_json()
self.assertEqual(len(data),len(test_shopcart))
for i in range(len(test_shopcart)):
self.assertEqual(data[i]['item_id'], test_shopcart[i].item_id)
self.assertEqual(data[i]['item_name'], test_shopcart[i].item_name)
self.assertEqual(data[i]['quantity'], test_shopcart[i].quantity)
self.assertEqual(data[i]['price'], test_shopcart[i].price)
def test_get_shopcart_empty(self):
"""Get an empty Shopcart"""
resp = self.app.get("/shopcarts/0")
self.assertEqual(resp.status_code, status.HTTP_200_OK)
self.assertEqual(resp.get_json(), [], "Expect to return an empty list")
def test_get_shopcart_invalid(self):
"""Get a Shopcart with invalid user id"""
resp = self.app.get("/shopcarts/s")
self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)
######################################################################
# TEST CREATE SHOPCART
######################################################################
def test_create_shopcart(self):
"""Create a new Shopcart"""
resp = self.app.post(
BASE_URL,
json={"user_id": 10023},
content_type=CONTENT_TYPE_JSON
)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# Make sure location header is set
location = resp.headers.get("Location", None)
self.assertIsNotNone(location)
# Check the data is correct
new_shopcart = resp.get_json()
self.assertEqual(new_shopcart, [], "Expect to return an empty list")
def test_create_shopcart_with_item(self):
        """Create a new Shopcart with an item"""
shopcart = ItemFactory()
logging.debug(shopcart)
resp = self.app.post(
BASE_URL,
json=shopcart.serialize(),
content_type=CONTENT_TYPE_JSON
)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# Make sure location header is set
location = resp.headers.get("Location", None)
self.assertIsNotNone(location)
# Check the data is correct
new_shopcart = resp.get_json()
self.assertEqual(new_shopcart[0]["user_id"], shopcart.user_id, "User IDs do not match")
self.assertEqual(new_shopcart[0]["item_id"], shopcart.item_id, "Item IDs do not match")
self.assertEqual(new_shopcart[0]["item_name"], shopcart.item_name, "Item names do not match")
self.assertEqual(new_shopcart[0]["quantity"], shopcart.quantity, "Quantities do not match")
self.assertEqual(new_shopcart[0]["price"], shopcart.price, "Prices do not match")
# Check that the location header was correct
resp = self.app.get(location, content_type=CONTENT_TYPE_JSON)
self.assertEqual(resp.status_code, status.HTTP_200_OK)
new_shopcart = resp.get_json()
self.assertEqual(new_shopcart[0]["user_id"], shopcart.user_id, "User IDs do not match")
self.assertEqual(new_shopcart[0]["item_id"], shopcart.item_id, "Item IDs do not match")
self.assertEqual(new_shopcart[0]["item_name"], shopcart.item_name, "Item names do not match")
self.assertEqual(new_shopcart[0]["quantity"], shopcart.quantity, "Quantities do not match")
self.assertEqual(new_shopcart[0]["price"], shopcart.price, "Prices do not match")
def test_create_shopcart_with_item_list(self):
"""Create a new Shopcart with a list of items"""
shopcart = ItemFactory()
shopcarts = [shopcart.serialize()]
for i in range(2):
shopcarts.append(ItemFactory(user_id = shopcart.user_id, item_id = shopcart.item_id+i+1).serialize())
logging.debug(shopcarts)
resp = self.app.post(
BASE_URL,
json={"user_id": shopcart.user_id,
"items": shopcarts},
content_type=CONTENT_TYPE_JSON
)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# Make sure location header is set
location = resp.headers.get("Location", None)
self.assertIsNotNone(location)
# Check the data is correct
new_shopcart = resp.get_json()
for i in range(3):
self.assertEqual(new_shopcart[i]["user_id"], shopcarts[i]["user_id"], "User IDs do not match")
self.assertEqual(new_shopcart[i]["item_id"], shopcarts[i]["item_id"], "Item IDs do not match")
self.assertEqual(new_shopcart[i]["item_name"], shopcarts[i]["item_name"], "Item names do not match")
self.assertEqual(new_shopcart[i]["quantity"], shopcarts[i]["quantity"], "Quantities do not match")
self.assertEqual(new_shopcart[i]["price"], shopcarts[i]["price"], "Prices do not match")
def test_create_shopcart_no_data(self):
"""Create a Shopcart with missing data"""
resp = self.app.post(BASE_URL, json={}, content_type=CONTENT_TYPE_JSON)
self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
def test_create_shopcart_no_content_type(self):
"""Create a Shopcart with no content type"""
resp = self.app.post(BASE_URL)
self.assertEqual(resp.status_code, status.HTTP_415_UNSUPPORTED_MEDIA_TYPE)
def test_create_shopcart_bad_id(self):
"""Create a Shopcart with bad user ID or bad item ID"""
# change user ID to a string
resp = self.app.post(
BASE_URL,
json={"user_id": "true"},
content_type=CONTENT_TYPE_JSON
)
self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
test_shopcart = ItemFactory()
logging.debug(test_shopcart)
test_shopcart.item_id = "test"
resp = self.app.post(
BASE_URL,
json=test_shopcart.serialize(),
content_type=CONTENT_TYPE_JSON
)
self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
def test_create_shopcart_already_exist(self):
"""Create a Shopcart that already has items in it"""
# create a shopcart with an item
test_shopcart = ItemFactory()
logging.debug(test_shopcart)
resp = self.app.post(
BASE_URL,
json=test_shopcart.serialize(),
content_type=CONTENT_TYPE_JSON
)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
resp = self.app.post(
BASE_URL,
json=test_shopcart.serialize(),
content_type=CONTENT_TYPE_JSON
)
self.assertEqual(resp.status_code, status.HTTP_409_CONFLICT)
def test_create_shopcart_duplicate_item_id(self):
"""Create a Shopcart with duplicate item id"""
item = ItemFactory()
shopcart = [item.serialize(), item.serialize()]
logging.debug(shopcart)
resp = self.app.post(
BASE_URL,
json={"user_id": item.user_id,
"items": shopcart},
content_type=CONTENT_TYPE_JSON
)
self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
######################################################################
# TEST DELETE SHOPCARTS
######################################################################
def test_delete_empty_shopcart(self):
        """Test deleting an existing shopcart with no items added"""
user_id = 10023
        resp = self.app.delete(
            "/shopcarts/{}".format(user_id),
            content_type=CONTENT_TYPE_JSON
        )
self.assertEqual(resp.status_code, status.HTTP_204_NO_CONTENT)
def test_delete_shopcart_with_items(self):
"""Test deleting a shopcart with items added"""
shopcart = self._create_items(3)
        resp = self.app.delete(
            "/shopcarts/{}".format(shopcart[0].user_id),
            content_type=CONTENT_TYPE_JSON
        )
self.assertEqual(resp.status_code, status.HTTP_204_NO_CONTENT)
def test_delete_shopcart_not_found(self):
"""Delete a Shopcart that's not found"""
resp = self.app.delete("/shopcarts/0")
self.assertEqual(resp.status_code, status.HTTP_204_NO_CONTENT)
######################################################################
# TEST CREATE ITEM
######################################################################
def test_create_item(self):
"""Creates an item"""
req = ItemFactory().serialize()
user_id = req.pop("user_id")
url = BASE_URL + "/" + str(user_id) + "/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
new_item = resp.get_json()
self.assertEqual(new_item["user_id"], user_id, "User IDs do not match")
self.assertEqual(new_item["item_name"], req["item_name"], "Item names do not match")
self.assertEqual(new_item["item_id"], req["item_id"], "Item IDs do not match")
self.assertEqual(new_item["quantity"], req["quantity"], "Quantities do not match")
        self.assertAlmostEqual(new_item["price"], req["price"], msg="Prices do not match")
def test_create_item_bad_name(self):
"""Attempts to create an item with non-string name"""
req = ItemFactory().serialize()
user_id = req.pop("user_id")
req["item_name"] = -1
url = BASE_URL + "/" + str(user_id) + "/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
def test_create_item_bad_id(self):
"""Attempts to create an item with a negative int id"""
req = ItemFactory().serialize()
user_id = req.pop("user_id")
req["item_id"] = -1
url = BASE_URL + "/" + str(user_id) + "/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
def test_create_item_bad_quantity(self):
"""Attempts to create an item with a non-positive int quantity"""
req = ItemFactory().serialize()
user_id = req.pop("user_id")
req["quantity"] = 0
url = BASE_URL + "/" + str(user_id) + "/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
def test_create_item_negative_price(self):
"""Attempts to create an item with a negative float price """
req = ItemFactory().serialize()
user_id = req.pop("user_id")
req["price"] = -0.5
url = BASE_URL + "/" + str(user_id) + "/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
def test_create_item_bad_price(self):
"""Attempts to create an item with a string as price """
req = ItemFactory().serialize()
user_id = req.pop("user_id")
req["price"] = "foo"
url = BASE_URL + "/" + str(user_id) + "/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
    def test_create_item_invalid_shopcart_id(self):
        """Attempts to create an item with a negative float shopcart_id in url"""
req = ItemFactory().serialize()
user_id = -1.5
url = BASE_URL + "/" + str(user_id) + "/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)
def test_create_item_duplicate_id(self):
""" Attempts creating an item with a duplicate ID. """
req = ItemFactory().serialize()
user_id = req.pop("user_id")
url = BASE_URL + "/" + str(user_id) + "/items"
logging.debug(req)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
req2 = ItemFactory(user_id=user_id).serialize()
del req2["user_id"]
logging.debug(req2)
resp = self.app.post(
url, json=req2, content_type=CONTENT_TYPE_JSON
)
self.assertEqual(resp.status_code, status.HTTP_409_CONFLICT)
######################################################################
# TEST READ ITEM
######################################################################
def test_read_an_item(self):
"""Read an item in a certain shopcart"""
test_shopcart = self._create_items(1)
resp = self.app.get(
"{0}/{1}/items/{2}".format(BASE_URL, test_shopcart[0].user_id,test_shopcart[0].item_id), content_type=CONTENT_TYPE_JSON
)
self.assertEqual(resp.status_code, status.HTTP_200_OK)
data = resp.get_json()
self.assertEqual(data['item_id'], test_shopcart[0].item_id)
self.assertEqual(data['item_name'], test_shopcart[0].item_name)
self.assertEqual(data['quantity'], test_shopcart[0].quantity)
self.assertEqual(data['price'], test_shopcart[0].price)
def test_read_an_item_not_found(self):
        """Read an item when the shopcart is not found"""
resp = self.app.get("/shopcarts/0/items/0")
self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)
    def test_read_an_item_not_found_item(self):
        """Read an item that's not found in an existing shopcart"""
test_shopcart = self._create_items(1)
resp = self.app.get(
"{0}/{1}/items/{2}".format(BASE_URL, test_shopcart[0].user_id,test_shopcart[0].item_id+100), content_type=CONTENT_TYPE_JSON
)
self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)
######################################################################
# TEST UPDATE ITEM
######################################################################
def test_update_item_no_quantity_no_price(self):
""" Attempts updating an item with no quantity and no price in body. """
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test updating with no quantity no price provided
        item_id = req.pop("item_id")
        req.pop("quantity")
        req.pop("price")
        req.pop("item_name")
        new_url = f"{BASE_URL}/{user_id}/items/{item_id}"
resp = self.app.put(
new_url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
def test_update_item_bad_quantity_zero(self):
""" Attempts updating an item with 0 as quantity. """
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
        # test updating with a zero quantity
req["quantity"] = 0
req.pop("item_name")
item_id = req.pop("item_id")
new_url = f"{BASE_URL}/{user_id}/items/{item_id}"
resp = self.app.put(
new_url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
def test_update_item_bad_quantity_float(self):
""" Attempts updating an item with a quantity not in integer. """
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
        # test updating with a non-integer quantity
req["quantity"] = 3.5
req.pop("item_name")
item_id = req.pop("item_id")
new_url = f"{BASE_URL}/{user_id}/items/{item_id}"
resp = self.app.put(
new_url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
def test_update_item_bad_price(self):
""" Attempts updating an item with a negative price. """
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
        # test updating with a negative price
req["price"] = -1
req.pop("item_name")
item_id = req.pop("item_id")
new_url = f"{BASE_URL}/{user_id}/items/{item_id}"
resp = self.app.put(
new_url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
def test_update_item_only_price(self):
""" Attempts updating an item with a valid price. """
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test updating with a valid price
req["price"] = 23
req.pop("quantity")
req.pop("item_name")
item_id = req.pop("item_id")
new_url = f"{BASE_URL}/{user_id}/items/{item_id}"
resp = self.app.put(
new_url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_200_OK)
def test_update_item_only_quantity(self):
""" Attempts updating an item with a valid quantity. """
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test updating with a valid quantity
req["quantity"] = 12
req.pop("price")
req.pop("item_name")
item_id = req.pop("item_id")
new_url = f"{BASE_URL}/{user_id}/items/{item_id}"
resp = self.app.put(
new_url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_200_OK)
def test_update_item_quantity_price(self):
        """ Attempts updating an item with a valid quantity and a valid price. """
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test updating with a valid price and a valid quantity
req["quantity"] = 12
req["price"] = 24
req.pop("item_name")
item_id = req.pop("item_id")
new_url = f"{BASE_URL}/{user_id}/items/{item_id}"
resp = self.app.put(
new_url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_200_OK)
def test_update_item_no_shopcart_found(self):
""" Attempts updating an item but the associated shopcart id does not exist. """
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test updating with a valid price and a valid quantity
req["quantity"] = 12
req["price"] = 24
req.pop("item_name")
user_id = 10000
item_id = req.pop("item_id")
new_url = f"{BASE_URL}/{user_id}/items/{item_id}"
resp = self.app.put(
new_url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)
def test_update_item_no_item_found(self):
""" Attempts updating an item but the associated item id does not exist. """
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test updating with a valid price and a valid quantity
req["quantity"] = 12
req["price"] = 24
req.pop("item_name")
item_id = 100000001
new_url = f"{BASE_URL}/{user_id}/items/{item_id}"
resp = self.app.put(
new_url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)
######################################################################
# TEST DELETE ITEM
######################################################################
def test_delete_item(self):
""" Attempts deleting item. """
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test deletion
item_id = req["item_id"]
new_url = f"{BASE_URL}/{user_id}/items/{item_id}"
resp = self.app.delete(
new_url, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_204_NO_CONTENT)
self.assertEqual(resp.get_data(), b"")
def test_delete_nonexistent_itemid(self):
""" Attempts deleting item that doesn't exist."""
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test deletion
item_id = req["item_id"] + 1 # this item doesn't exist
new_url = f"{BASE_URL}/{user_id}/items/{item_id}"
resp = self.app.delete(
new_url, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_204_NO_CONTENT) # still same response
self.assertEqual(resp.get_data(), b"")
def test_delete_nonexistent_userid(self):
""" Attempts deleting item whose user_id doesn't exist."""
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test deletion
        user_id += 1  # nonexistent user_id, but a valid value
item_id = req["item_id"]
new_url = f"{BASE_URL}/{user_id}/items/{item_id}"
resp = self.app.delete(
new_url, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_204_NO_CONTENT) # still same response
self.assertEqual(resp.get_data(), b"")
def test_delete_nonexistent_userid_nonexistent_itemid(self):
""" Attempts deleting item whose user_id and item_id don't exist."""
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test deletion
user_id += 1 # wrong user_id but still valid
item_id = req["item_id"] + 1 # wrong item_id
new_url = f"{BASE_URL}/{user_id}/items/{item_id}"
resp = self.app.delete(
new_url, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_204_NO_CONTENT) # still same response
self.assertEqual(resp.get_data(), b"")
def test_delete_item_negative_userid(self):
""" Attempts deleting item on an invalid negative user_id"""
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test deletion
user_id = -1
item_id = req["item_id"]
new_url = f"{BASE_URL}/{user_id}/items/{item_id}"
resp = self.app.delete(
new_url, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)
def test_delete_item_nonint_userid(self):
""" Attempts deleting item on an invalid non-int user_id"""
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test deletion
user_id = 1.5
item_id = req["item_id"] + 1 # wrong item_id too but shouldn't matter
new_url = f"{BASE_URL}/{user_id}/items/{item_id}"
resp = self.app.delete(
new_url, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)
def test_delete_item_nonint_item_id(self):
        """ Attempts deleting item on an invalid non-int item_id"""
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test deletion
user_id += 1 # wrong user_id (still valid) too but shouldn't matter
item_id = 1.5
new_url = f"{BASE_URL}/{user_id}/items/{item_id}"
resp = self.app.delete(
new_url, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)
def test_delete_item_negative_itemid(self):
""" Attempts deleting item on an invalid negative item_id"""
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test deletion
item_id = -1
new_url = f"{BASE_URL}/{user_id}/items/{item_id}"
resp = self.app.delete(
new_url, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)
def test_delete_item_nonint_userid_negative_itemid(self):
""" Attempts deleting item on an invalid user_id and invalid item_id"""
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test deletion
user_id = 1.5
item_id = -1
new_url = f"{BASE_URL}/{user_id}/items/{item_id}"
resp = self.app.delete(
new_url, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)
######################################################################
# TEST HOLD ITEM
######################################################################
def test_hold_item(self):
""" Attempts holding item for later. """
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test hold for later
item_id = req["item_id"]
new_url = f"{BASE_URL}/{user_id}/items/{item_id}/hold"
resp = self.app.put(
new_url, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_200_OK)
def test_hold_nonexistent_itemid(self):
""" Attempts holding item that doesn't exist."""
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test hold the non-exist item
item_id = req["item_id"] + 1 # this item doesn't exist
new_url = f"{BASE_URL}/{user_id}/items/{item_id}/hold"
resp = self.app.put(
new_url, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND) # still same response
def test_hold_nonexistent_userid(self):
""" Attempts holding an item in the shopcart whose user_id doesn't exist."""
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test hold when the user_id not exists
        user_id += 1  # nonexistent user_id, but a valid value
item_id = req["item_id"]
new_url = f"{BASE_URL}/{user_id}/items/{item_id}/hold"
resp = self.app.put(
new_url, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND) # still same response
def test_hold_nonexistent_userid_nonexistent_itemid(self):
""" Attempts holding item whose user_id and item_id don't exist."""
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test hold
user_id += 1 # wrong user_id but still valid
item_id = req["item_id"] + 1 # wrong item_id
new_url = f"{BASE_URL}/{user_id}/items/{item_id}/hold"
resp = self.app.put(
new_url, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND) # still same response
######################################################################
# TEST RESUME ITEM
######################################################################
def test_resume_item(self):
""" Attempts resuming item. """
# create an item first
test_item = ItemFactory()
req = test_item.serialize()
user_id = req.pop("user_id")
url = f"{BASE_URL}/{user_id}/items"
logging.debug(url)
resp = self.app.post(
url, json=req, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
# test resume
item_id = req["item_id"]
new_url = f"{BASE_URL}/{user_id}/items/{item_id}/resume"
resp = self.app.put(
new_url, content_type=CONTENT_TYPE_JSON
)
logging.debug(resp)
self.assertEqual(resp.status_code, status.HTTP_200_OK)

    def test_resume_nonexistent_itemid(self):
        """Attempts resuming an item that doesn't exist."""
        # create an item first
        test_item = ItemFactory()
        req = test_item.serialize()
        user_id = req.pop("user_id")
        url = f"{BASE_URL}/{user_id}/items"
        logging.debug(url)
        resp = self.app.post(
            url, json=req, content_type=CONTENT_TYPE_JSON
        )
        logging.debug(resp)
        self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
        # test resume
        item_id = req["item_id"] + 1  # this item doesn't exist
        new_url = f"{BASE_URL}/{user_id}/items/{item_id}/resume"
        resp = self.app.put(
            new_url, content_type=CONTENT_TYPE_JSON
        )
        logging.debug(resp)
        self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)  # same 404 as a missing item

    def test_resume_nonexistent_userid(self):
        """Attempts resuming an item whose user_id doesn't exist."""
        # create an item first
        test_item = ItemFactory()
        req = test_item.serialize()
        user_id = req.pop("user_id")
        url = f"{BASE_URL}/{user_id}/items"
        logging.debug(url)
        resp = self.app.post(
            url, json=req, content_type=CONTENT_TYPE_JSON
        )
        logging.debug(resp)
        self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
        # test resume
        user_id += 1  # wrong user_id, but still a valid value
        item_id = req["item_id"]
        new_url = f"{BASE_URL}/{user_id}/items/{item_id}/resume"
        resp = self.app.put(
            new_url, content_type=CONTENT_TYPE_JSON
        )
        logging.debug(resp)
        self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)  # same 404 as a missing item

    def test_resume_nonexistent_userid_nonexistent_itemid(self):
        """Attempts resuming an item whose user_id and item_id don't exist."""
        # create an item first
        test_item = ItemFactory()
        req = test_item.serialize()
        user_id = req.pop("user_id")
        url = f"{BASE_URL}/{user_id}/items"
        logging.debug(url)
        resp = self.app.post(
            url, json=req, content_type=CONTENT_TYPE_JSON
        )
        logging.debug(resp)
        self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
        # test resume
        user_id += 1  # wrong user_id, but still a valid value
        item_id = req["item_id"] + 1  # wrong item_id
        new_url = f"{BASE_URL}/{user_id}/items/{item_id}/resume"
        resp = self.app.put(
            new_url, content_type=CONTENT_TYPE_JSON
        )
        logging.debug(resp)
        self.assertEqual(resp.status_code, status.HTTP_404_NOT_FOUND)  # same 404 as a missing item

    ######################################################################
    # TEST QUERY SHOPCARTS
    ######################################################################
    def test_query_shopcarts_empty(self):
        """Attempts querying shopcarts, expecting an empty list."""
        self._create_items(2)
        self._create_items(3)
        item_id = 9998  # no shopcart contains this item
        new_url = f"{BASE_URL}?item-id={item_id}"
        resp = self.app.get(new_url)
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        data = resp.get_json()
        self.assertEqual(len(data), 0)

    def test_query_shopcarts_nonempty(self):
        """Attempts querying shopcarts, expecting a nonempty list."""
        shopcart1 = self._create_items(1)
        shopcart2 = self._create_items(3)
        item_id = shopcart1[0].item_id
        new_url = f"{BASE_URL}?item-id={item_id}"
        resp = self.app.get(new_url)
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        data = resp.get_json()
        self.assertEqual(len(data), 2)
        dict1 = {'user_id': shopcart1[0].user_id}
        dict2 = {'user_id': shopcart2[0].user_id}
        self.assertCountEqual(data, [dict1, dict2])
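
    # The query tests above filter shopcarts by `?item-id=...` and expect a list
    # of {"user_id": ...} dicts, one per cart containing that item. A minimal
    # sketch of that filtering with a plain list standing in for the database
    # (`query_shopcarts` is an illustrative name, not the service's actual
    # implementation):

```python
def query_shopcarts(items, item_id):
    """Return one {"user_id": ...} entry per cart that holds `item_id`."""
    user_ids = {it["user_id"] for it in items if it["item_id"] == item_id}
    return [{"user_id": uid} for uid in sorted(user_ids)]
```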

######################################################################
# src/Application/App.py  (repo: jvncode/python_app_exercise, license: MIT)
######################################################################
from src.Services.ApiService import ApiService

class App:
    def __init__(self):
        self._api_service = ApiService()

    def api_service(self) -> ApiService:
        return self._api_service
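
# App is a tiny composition root: it builds its ApiService once in __init__ and
# hands the same instance out through an accessor. A sketch of the same pattern
# with a stub service (StubApiService/StubApp are hypothetical stand-ins; the
# real ApiService lives in src/Services/ApiService.py):

```python
class StubApiService:
    def ping(self) -> str:
        return "ok"


class StubApp:
    def __init__(self):
        # built once, shared by every caller of api_service()
        self._api_service = StubApiService()

    def api_service(self) -> StubApiService:
        return self._api_service
```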

######################################################################
# sdk/python/pulumi_aws/cognito/user.py
# (repo: chivandikwa/pulumi-aws, licenses: ECL-2.0, Apache-2.0)
######################################################################
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***

import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities

__all__ = ['UserArgs', 'User']


@pulumi.input_type
class UserArgs:
    def __init__(__self__, *,
                 user_pool_id: pulumi.Input[str],
                 username: pulumi.Input[str],
                 attributes: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
                 client_metadata: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
                 desired_delivery_mediums: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 enabled: Optional[pulumi.Input[bool]] = None,
                 force_alias_creation: Optional[pulumi.Input[bool]] = None,
                 message_action: Optional[pulumi.Input[str]] = None,
                 password: Optional[pulumi.Input[str]] = None,
                 temporary_password: Optional[pulumi.Input[str]] = None,
                 validation_data: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None):
        """
        The set of arguments for constructing a User resource.
        :param pulumi.Input[str] user_pool_id: The user pool ID for the user pool where the user will be created.
        :param pulumi.Input[Mapping[str, pulumi.Input[str]]] attributes: A map that contains user attributes and attribute values to be set for the user.
        :param pulumi.Input[Mapping[str, pulumi.Input[str]]] client_metadata: A map of custom key-value pairs that you can provide as input for any custom workflows that user creation triggers. Amazon Cognito does not store the `client_metadata` value. This data is available only to Lambda triggers that are assigned to a user pool to support custom workflows. If your user pool configuration does not include triggers, the ClientMetadata parameter serves no purpose. For more information, see [Customizing User Pool Workflows with Lambda Triggers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html).
        :param pulumi.Input[Sequence[pulumi.Input[str]]] desired_delivery_mediums: A list of mediums the welcome message will be sent through. Allowed values are `EMAIL` and `SMS`. If it's provided, make sure you have also specified the `email` attribute for the `EMAIL` medium and `phone_number` for `SMS`. More than one value can be specified. Amazon Cognito does not store the `desired_delivery_mediums` value. Defaults to `["SMS"]`.
        :param pulumi.Input[bool] enabled: Specifies whether the user should be enabled after creation. The welcome message will be sent regardless of the `enabled` value. The behavior can be changed with the `message_action` argument. Defaults to `true`.
        :param pulumi.Input[bool] force_alias_creation: If this parameter is set to True and the `phone_number` or `email` address specified in the `attributes` parameter already exists as an alias with a different user, Amazon Cognito will migrate the alias from the previous user to the newly created user. The previous user will no longer be able to log in using that alias. Amazon Cognito does not store the `force_alias_creation` value. Defaults to `false`.
        :param pulumi.Input[str] message_action: Set to `RESEND` to resend the invitation message to a user that already exists and reset the expiration limit on the user's account. Set to `SUPPRESS` to suppress sending the message. Only one value can be specified. Amazon Cognito does not store the `message_action` value.
        :param pulumi.Input[str] password: The user's permanent password. This password must conform to the password policy specified by the user pool the user belongs to. The welcome message always contains only the `temporary_password` value. You can suppress sending the welcome message with the `message_action` argument. Amazon Cognito does not store the `password` value. Conflicts with `temporary_password`.
        :param pulumi.Input[str] temporary_password: The user's temporary password. Conflicts with `password`.
        :param pulumi.Input[Mapping[str, pulumi.Input[str]]] validation_data: The user's validation data. This is an array of name-value pairs that contain user attributes and attribute values that you can use for custom validation, such as restricting the types of user accounts that can be registered. Amazon Cognito does not store the `validation_data` value. For more information, see [Customizing User Pool Workflows with Lambda Triggers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html).
        """
        pulumi.set(__self__, "user_pool_id", user_pool_id)
        pulumi.set(__self__, "username", username)
        if attributes is not None:
            pulumi.set(__self__, "attributes", attributes)
        if client_metadata is not None:
            pulumi.set(__self__, "client_metadata", client_metadata)
        if desired_delivery_mediums is not None:
            pulumi.set(__self__, "desired_delivery_mediums", desired_delivery_mediums)
        if enabled is not None:
            pulumi.set(__self__, "enabled", enabled)
        if force_alias_creation is not None:
            pulumi.set(__self__, "force_alias_creation", force_alias_creation)
        if message_action is not None:
            pulumi.set(__self__, "message_action", message_action)
        if password is not None:
            pulumi.set(__self__, "password", password)
        if temporary_password is not None:
            pulumi.set(__self__, "temporary_password", temporary_password)
        if validation_data is not None:
            pulumi.set(__self__, "validation_data", validation_data)

    @property
    @pulumi.getter(name="userPoolId")
    def user_pool_id(self) -> pulumi.Input[str]:
        """
        The user pool ID for the user pool where the user will be created.
        """
        return pulumi.get(self, "user_pool_id")

    @user_pool_id.setter
    def user_pool_id(self, value: pulumi.Input[str]):
        pulumi.set(self, "user_pool_id", value)

    @property
    @pulumi.getter
    def username(self) -> pulumi.Input[str]:
        return pulumi.get(self, "username")

    @username.setter
    def username(self, value: pulumi.Input[str]):
        pulumi.set(self, "username", value)

    @property
    @pulumi.getter
    def attributes(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
        """
        A map that contains user attributes and attribute values to be set for the user.
        """
        return pulumi.get(self, "attributes")

    @attributes.setter
    def attributes(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
        pulumi.set(self, "attributes", value)

    @property
    @pulumi.getter(name="clientMetadata")
    def client_metadata(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
        """
        A map of custom key-value pairs that you can provide as input for any custom workflows that user creation triggers. Amazon Cognito does not store the `client_metadata` value. This data is available only to Lambda triggers that are assigned to a user pool to support custom workflows. If your user pool configuration does not include triggers, the ClientMetadata parameter serves no purpose. For more information, see [Customizing User Pool Workflows with Lambda Triggers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html).
        """
        return pulumi.get(self, "client_metadata")

    @client_metadata.setter
    def client_metadata(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
        pulumi.set(self, "client_metadata", value)

    @property
    @pulumi.getter(name="desiredDeliveryMediums")
    def desired_delivery_mediums(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        A list of mediums the welcome message will be sent through. Allowed values are `EMAIL` and `SMS`. If it's provided, make sure you have also specified the `email` attribute for the `EMAIL` medium and `phone_number` for `SMS`. More than one value can be specified. Amazon Cognito does not store the `desired_delivery_mediums` value. Defaults to `["SMS"]`.
        """
        return pulumi.get(self, "desired_delivery_mediums")

    @desired_delivery_mediums.setter
    def desired_delivery_mediums(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "desired_delivery_mediums", value)

    @property
    @pulumi.getter
    def enabled(self) -> Optional[pulumi.Input[bool]]:
        """
        Specifies whether the user should be enabled after creation. The welcome message will be sent regardless of the `enabled` value. The behavior can be changed with the `message_action` argument. Defaults to `true`.
        """
        return pulumi.get(self, "enabled")

    @enabled.setter
    def enabled(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "enabled", value)

    @property
    @pulumi.getter(name="forceAliasCreation")
    def force_alias_creation(self) -> Optional[pulumi.Input[bool]]:
        """
        If this parameter is set to True and the `phone_number` or `email` address specified in the `attributes` parameter already exists as an alias with a different user, Amazon Cognito will migrate the alias from the previous user to the newly created user. The previous user will no longer be able to log in using that alias. Amazon Cognito does not store the `force_alias_creation` value. Defaults to `false`.
        """
        return pulumi.get(self, "force_alias_creation")

    @force_alias_creation.setter
    def force_alias_creation(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "force_alias_creation", value)

    @property
    @pulumi.getter(name="messageAction")
    def message_action(self) -> Optional[pulumi.Input[str]]:
        """
        Set to `RESEND` to resend the invitation message to a user that already exists and reset the expiration limit on the user's account. Set to `SUPPRESS` to suppress sending the message. Only one value can be specified. Amazon Cognito does not store the `message_action` value.
        """
        return pulumi.get(self, "message_action")

    @message_action.setter
    def message_action(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "message_action", value)

    @property
    @pulumi.getter
    def password(self) -> Optional[pulumi.Input[str]]:
        """
        The user's permanent password. This password must conform to the password policy specified by the user pool the user belongs to. The welcome message always contains only the `temporary_password` value. You can suppress sending the welcome message with the `message_action` argument. Amazon Cognito does not store the `password` value. Conflicts with `temporary_password`.
        """
        return pulumi.get(self, "password")

    @password.setter
    def password(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "password", value)

    @property
    @pulumi.getter(name="temporaryPassword")
    def temporary_password(self) -> Optional[pulumi.Input[str]]:
        """
        The user's temporary password. Conflicts with `password`.
        """
        return pulumi.get(self, "temporary_password")

    @temporary_password.setter
    def temporary_password(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "temporary_password", value)

    @property
    @pulumi.getter(name="validationData")
    def validation_data(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
        """
        The user's validation data. This is an array of name-value pairs that contain user attributes and attribute values that you can use for custom validation, such as restricting the types of user accounts that can be registered. Amazon Cognito does not store the `validation_data` value. For more information, see [Customizing User Pool Workflows with Lambda Triggers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html).
        """
        return pulumi.get(self, "validation_data")

    @validation_data.setter
    def validation_data(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
        pulumi.set(self, "validation_data", value)
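
# UserArgs follows a consistent pattern: required arguments are always stored,
# optional arguments are stored only when explicitly provided (the
# `if x is not None: pulumi.set(...)` checks), and every argument is exposed
# through a property getter/setter pair. A minimal sketch of that pattern with
# a plain dict replacing pulumi's property store (`ArgsSketch` is a hypothetical
# illustration, not pulumi's API):

```python
from typing import Optional


class ArgsSketch:
    """Required args always set; optional args stored only when provided."""

    def __init__(self, user_pool_id: str, username: str,
                 enabled: Optional[bool] = None):
        self._values = {"user_pool_id": user_pool_id, "username": username}
        if enabled is not None:
            # only explicitly provided optionals appear in the store
            self._values["enabled"] = enabled

    @property
    def enabled(self) -> Optional[bool]:
        return self._values.get("enabled")

    @enabled.setter
    def enabled(self, value: Optional[bool]) -> None:
        self._values["enabled"] = value
```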


@pulumi.input_type
class _UserState:
    def __init__(__self__, *,
                 attributes: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
                 client_metadata: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
                 creation_date: Optional[pulumi.Input[str]] = None,
                 desired_delivery_mediums: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 enabled: Optional[pulumi.Input[bool]] = None,
                 force_alias_creation: Optional[pulumi.Input[bool]] = None,
                 last_modified_date: Optional[pulumi.Input[str]] = None,
                 message_action: Optional[pulumi.Input[str]] = None,
                 mfa_setting_lists: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 password: Optional[pulumi.Input[str]] = None,
                 preferred_mfa_setting: Optional[pulumi.Input[str]] = None,
                 status: Optional[pulumi.Input[str]] = None,
                 sub: Optional[pulumi.Input[str]] = None,
                 temporary_password: Optional[pulumi.Input[str]] = None,
                 user_pool_id: Optional[pulumi.Input[str]] = None,
                 username: Optional[pulumi.Input[str]] = None,
                 validation_data: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None):
        """
        Input properties used for looking up and filtering User resources.
        :param pulumi.Input[Mapping[str, pulumi.Input[str]]] attributes: A map that contains user attributes and attribute values to be set for the user.
        :param pulumi.Input[Mapping[str, pulumi.Input[str]]] client_metadata: A map of custom key-value pairs that you can provide as input for any custom workflows that user creation triggers. Amazon Cognito does not store the `client_metadata` value. This data is available only to Lambda triggers that are assigned to a user pool to support custom workflows. If your user pool configuration does not include triggers, the ClientMetadata parameter serves no purpose. For more information, see [Customizing User Pool Workflows with Lambda Triggers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html).
        :param pulumi.Input[Sequence[pulumi.Input[str]]] desired_delivery_mediums: A list of mediums the welcome message will be sent through. Allowed values are `EMAIL` and `SMS`. If it's provided, make sure you have also specified the `email` attribute for the `EMAIL` medium and `phone_number` for `SMS`. More than one value can be specified. Amazon Cognito does not store the `desired_delivery_mediums` value. Defaults to `["SMS"]`.
        :param pulumi.Input[bool] enabled: Specifies whether the user should be enabled after creation. The welcome message will be sent regardless of the `enabled` value. The behavior can be changed with the `message_action` argument. Defaults to `true`.
        :param pulumi.Input[bool] force_alias_creation: If this parameter is set to True and the `phone_number` or `email` address specified in the `attributes` parameter already exists as an alias with a different user, Amazon Cognito will migrate the alias from the previous user to the newly created user. The previous user will no longer be able to log in using that alias. Amazon Cognito does not store the `force_alias_creation` value. Defaults to `false`.
        :param pulumi.Input[str] message_action: Set to `RESEND` to resend the invitation message to a user that already exists and reset the expiration limit on the user's account. Set to `SUPPRESS` to suppress sending the message. Only one value can be specified. Amazon Cognito does not store the `message_action` value.
        :param pulumi.Input[str] password: The user's permanent password. This password must conform to the password policy specified by the user pool the user belongs to. The welcome message always contains only the `temporary_password` value. You can suppress sending the welcome message with the `message_action` argument. Amazon Cognito does not store the `password` value. Conflicts with `temporary_password`.
        :param pulumi.Input[str] status: Current user status.
        :param pulumi.Input[str] sub: Unique user id that is never reassignable to another user.
        :param pulumi.Input[str] temporary_password: The user's temporary password. Conflicts with `password`.
        :param pulumi.Input[str] user_pool_id: The user pool ID for the user pool where the user will be created.
        :param pulumi.Input[Mapping[str, pulumi.Input[str]]] validation_data: The user's validation data. This is an array of name-value pairs that contain user attributes and attribute values that you can use for custom validation, such as restricting the types of user accounts that can be registered. Amazon Cognito does not store the `validation_data` value. For more information, see [Customizing User Pool Workflows with Lambda Triggers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html).
        """
        if attributes is not None:
            pulumi.set(__self__, "attributes", attributes)
        if client_metadata is not None:
            pulumi.set(__self__, "client_metadata", client_metadata)
        if creation_date is not None:
            pulumi.set(__self__, "creation_date", creation_date)
        if desired_delivery_mediums is not None:
            pulumi.set(__self__, "desired_delivery_mediums", desired_delivery_mediums)
        if enabled is not None:
            pulumi.set(__self__, "enabled", enabled)
        if force_alias_creation is not None:
            pulumi.set(__self__, "force_alias_creation", force_alias_creation)
        if last_modified_date is not None:
            pulumi.set(__self__, "last_modified_date", last_modified_date)
        if message_action is not None:
            pulumi.set(__self__, "message_action", message_action)
        if mfa_setting_lists is not None:
            pulumi.set(__self__, "mfa_setting_lists", mfa_setting_lists)
        if password is not None:
            pulumi.set(__self__, "password", password)
        if preferred_mfa_setting is not None:
            pulumi.set(__self__, "preferred_mfa_setting", preferred_mfa_setting)
        if status is not None:
            pulumi.set(__self__, "status", status)
        if sub is not None:
            pulumi.set(__self__, "sub", sub)
        if temporary_password is not None:
            pulumi.set(__self__, "temporary_password", temporary_password)
        if user_pool_id is not None:
            pulumi.set(__self__, "user_pool_id", user_pool_id)
        if username is not None:
            pulumi.set(__self__, "username", username)
        if validation_data is not None:
            pulumi.set(__self__, "validation_data", validation_data)

    @property
    @pulumi.getter
    def attributes(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
        """
        A map that contains user attributes and attribute values to be set for the user.
        """
        return pulumi.get(self, "attributes")

    @attributes.setter
    def attributes(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
        pulumi.set(self, "attributes", value)

    @property
    @pulumi.getter(name="clientMetadata")
    def client_metadata(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
        """
        A map of custom key-value pairs that you can provide as input for any custom workflows that user creation triggers. Amazon Cognito does not store the `client_metadata` value. This data is available only to Lambda triggers that are assigned to a user pool to support custom workflows. If your user pool configuration does not include triggers, the ClientMetadata parameter serves no purpose. For more information, see [Customizing User Pool Workflows with Lambda Triggers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html).
        """
        return pulumi.get(self, "client_metadata")

    @client_metadata.setter
    def client_metadata(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
        pulumi.set(self, "client_metadata", value)

    @property
    @pulumi.getter(name="creationDate")
    def creation_date(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "creation_date")

    @creation_date.setter
    def creation_date(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "creation_date", value)

    @property
    @pulumi.getter(name="desiredDeliveryMediums")
    def desired_delivery_mediums(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        A list of mediums the welcome message will be sent through. Allowed values are `EMAIL` and `SMS`. If it's provided, make sure you have also specified the `email` attribute for the `EMAIL` medium and `phone_number` for `SMS`. More than one value can be specified. Amazon Cognito does not store the `desired_delivery_mediums` value. Defaults to `["SMS"]`.
        """
        return pulumi.get(self, "desired_delivery_mediums")

    @desired_delivery_mediums.setter
    def desired_delivery_mediums(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "desired_delivery_mediums", value)

    @property
    @pulumi.getter
    def enabled(self) -> Optional[pulumi.Input[bool]]:
        """
        Specifies whether the user should be enabled after creation. The welcome message will be sent regardless of the `enabled` value. The behavior can be changed with the `message_action` argument. Defaults to `true`.
        """
        return pulumi.get(self, "enabled")

    @enabled.setter
    def enabled(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "enabled", value)

    @property
    @pulumi.getter(name="forceAliasCreation")
    def force_alias_creation(self) -> Optional[pulumi.Input[bool]]:
        """
        If this parameter is set to True and the `phone_number` or `email` address specified in the `attributes` parameter already exists as an alias with a different user, Amazon Cognito will migrate the alias from the previous user to the newly created user. The previous user will no longer be able to log in using that alias. Amazon Cognito does not store the `force_alias_creation` value. Defaults to `false`.
        """
        return pulumi.get(self, "force_alias_creation")

    @force_alias_creation.setter
    def force_alias_creation(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "force_alias_creation", value)

    @property
    @pulumi.getter(name="lastModifiedDate")
    def last_modified_date(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "last_modified_date")

    @last_modified_date.setter
    def last_modified_date(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "last_modified_date", value)

    @property
    @pulumi.getter(name="messageAction")
    def message_action(self) -> Optional[pulumi.Input[str]]:
        """
        Set to `RESEND` to resend the invitation message to a user that already exists and reset the expiration limit on the user's account. Set to `SUPPRESS` to suppress sending the message. Only one value can be specified. Amazon Cognito does not store the `message_action` value.
        """
        return pulumi.get(self, "message_action")

    @message_action.setter
    def message_action(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "message_action", value)

    @property
    @pulumi.getter(name="mfaSettingLists")
    def mfa_setting_lists(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        return pulumi.get(self, "mfa_setting_lists")

    @mfa_setting_lists.setter
    def mfa_setting_lists(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "mfa_setting_lists", value)

    @property
    @pulumi.getter
    def password(self) -> Optional[pulumi.Input[str]]:
        """
        The user's permanent password. This password must conform to the password policy specified by the user pool the user belongs to. The welcome message always contains only the `temporary_password` value. You can suppress sending the welcome message with the `message_action` argument. Amazon Cognito does not store the `password` value. Conflicts with `temporary_password`.
        """
        return pulumi.get(self, "password")

    @password.setter
    def password(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "password", value)

    @property
    @pulumi.getter(name="preferredMfaSetting")
    def preferred_mfa_setting(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "preferred_mfa_setting")

    @preferred_mfa_setting.setter
    def preferred_mfa_setting(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "preferred_mfa_setting", value)

    @property
    @pulumi.getter
    def status(self) -> Optional[pulumi.Input[str]]:
        """
        Current user status.
        """
        return pulumi.get(self, "status")

    @status.setter
    def status(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "status", value)

    @property
    @pulumi.getter
    def sub(self) -> Optional[pulumi.Input[str]]:
        """
        Unique user id that is never reassignable to another user.
        """
        return pulumi.get(self, "sub")

    @sub.setter
    def sub(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "sub", value)

    @property
    @pulumi.getter(name="temporaryPassword")
    def temporary_password(self) -> Optional[pulumi.Input[str]]:
        """
        The user's temporary password. Conflicts with `password`.
        """
        return pulumi.get(self, "temporary_password")

    @temporary_password.setter
    def temporary_password(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "temporary_password", value)

    @property
    @pulumi.getter(name="userPoolId")
    def user_pool_id(self) -> Optional[pulumi.Input[str]]:
        """
        The user pool ID for the user pool where the user will be created.
        """
        return pulumi.get(self, "user_pool_id")

    @user_pool_id.setter
    def user_pool_id(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "user_pool_id", value)

    @property
    @pulumi.getter
    def username(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "username")

    @username.setter
    def username(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "username", value)

    @property
    @pulumi.getter(name="validationData")
    def validation_data(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
        """
        The user's validation data. This is an array of name-value pairs that contain user attributes and attribute values that you can use for custom validation, such as restricting the types of user accounts that can be registered. Amazon Cognito does not store the `validation_data` value. For more information, see [Customizing User Pool Workflows with Lambda Triggers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html).
        """
        return pulumi.get(self, "validation_data")

    @validation_data.setter
    def validation_data(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
        pulumi.set(self, "validation_data", value)


class User(pulumi.CustomResource):
    @overload
    def __init__(__self__,
                 resource_name: str,
                 opts: Optional[pulumi.ResourceOptions] = None,
                 attributes: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
                 client_metadata: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
                 desired_delivery_mediums: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 enabled: Optional[pulumi.Input[bool]] = None,
                 force_alias_creation: Optional[pulumi.Input[bool]] = None,
                 message_action: Optional[pulumi.Input[str]] = None,
                 password: Optional[pulumi.Input[str]] = None,
                 temporary_password: Optional[pulumi.Input[str]] = None,
                 user_pool_id: Optional[pulumi.Input[str]] = None,
                 username: Optional[pulumi.Input[str]] = None,
                 validation_data: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
                 __props__=None):
"""
Provides a Cognito User Resource.
## Example Usage
### Basic configuration
```python
import pulumi
import pulumi_aws as aws
example_user_pool = aws.cognito.UserPool("exampleUserPool")
example_user = aws.cognito.User("exampleUser",
user_pool_id=example_user_pool.id,
username="example")
```
### Setting user attributes
```python
import pulumi
import pulumi_aws as aws
example_user_pool = aws.cognito.UserPool("exampleUserPool", schemas=[
aws.cognito.UserPoolSchemaArgs(
name="terraform",
attribute_data_type="Boolean",
mutable=False,
required=False,
developer_only_attribute=False,
),
aws.cognito.UserPoolSchemaArgs(
name="foo",
attribute_data_type="String",
mutable=False,
required=False,
developer_only_attribute=False,
string_attribute_constraints=aws.cognito.UserPoolSchemaStringAttributeConstraintsArgs(),
),
])
example_user = aws.cognito.User("exampleUser",
user_pool_id=example_user_pool.id,
username="example",
attributes={
"terraform": "true",
"foo": "bar",
"email": "no-reply@hashicorp.com",
"email_verified": "true",
})
```
## Import
Cognito User can be imported using the `user_pool_id`/`name` attributes concatenated, e.g.,
```sh
$ pulumi import aws:cognito/user:User user us-east-1_vG78M4goG/user
```
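The import ID is simply the `user_pool_id` and `username` joined by a slash; a quick sanity check with hypothetical values:

```python
user_pool_id = "us-east-1_vG78M4goG"  # hypothetical user pool ID
username = "user"                     # hypothetical username

# Build the ID expected by `pulumi import aws:cognito/user:User`
import_id = f"{user_pool_id}/{username}"
print(import_id)  # us-east-1_vG78M4goG/user
```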
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] attributes: A map that contains user attributes and attribute values to be set for the user.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] client_metadata: A map of custom key-value pairs that you can provide as input for any custom workflows that user creation triggers. Amazon Cognito does not store the `client_metadata` value. This data is available only to Lambda triggers that are assigned to a user pool to support custom workflows. If your user pool configuration does not include triggers, the ClientMetadata parameter serves no purpose. For more information, see [Customizing User Pool Workflows with Lambda Triggers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html).
:param pulumi.Input[Sequence[pulumi.Input[str]]] desired_delivery_mediums: A list of mediums through which the welcome message will be sent. Allowed values are `EMAIL` and `SMS`. If provided, make sure you have also specified the `email` attribute for the `EMAIL` medium and `phone_number` for `SMS`. More than one value can be specified. Amazon Cognito does not store the `desired_delivery_mediums` value. Defaults to `["SMS"]`.
:param pulumi.Input[bool] enabled: Specifies whether the user should be enabled after creation. The welcome message will be sent regardless of the `enabled` value. This behavior can be changed with the `message_action` argument. Defaults to `true`.
:param pulumi.Input[bool] force_alias_creation: If this parameter is set to True and the `phone_number` or `email` address specified in the `attributes` parameter already exists as an alias with a different user, Amazon Cognito will migrate the alias from the previous user to the newly created user. The previous user will no longer be able to log in using that alias. Amazon Cognito does not store the `force_alias_creation` value. Defaults to `false`.
:param pulumi.Input[str] message_action: Set to `RESEND` to resend the invitation message to a user that already exists and reset the expiration limit on the user's account. Set to `SUPPRESS` to suppress sending the message. Only one value can be specified. Amazon Cognito does not store the `message_action` value.
:param pulumi.Input[str] password: The user's permanent password. This password must conform to the password policy specified by the user pool the user belongs to. The welcome message always contains only the `temporary_password` value. You can suppress sending the welcome message with the `message_action` argument. Amazon Cognito does not store the `password` value. Conflicts with `temporary_password`.
:param pulumi.Input[str] temporary_password: The user's temporary password. Conflicts with `password`.
:param pulumi.Input[str] user_pool_id: The user pool ID for the user pool where the user will be created.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] validation_data: The user's validation data. This is an array of name-value pairs that contain user attributes and attribute values that you can use for custom validation, such as restricting the types of user accounts that can be registered. Amazon Cognito does not store the `validation_data` value. For more information, see [Customizing User Pool Workflows with Lambda Triggers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html).
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: UserArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides a Cognito User Resource.
## Example Usage
### Basic configuration
```python
import pulumi
import pulumi_aws as aws
example_user_pool = aws.cognito.UserPool("exampleUserPool")
example_user = aws.cognito.User("exampleUser",
user_pool_id=example_user_pool.id,
username="example")
```
### Setting user attributes
```python
import pulumi
import pulumi_aws as aws
example_user_pool = aws.cognito.UserPool("exampleUserPool", schemas=[
aws.cognito.UserPoolSchemaArgs(
name="terraform",
attribute_data_type="Boolean",
mutable=False,
required=False,
developer_only_attribute=False,
),
aws.cognito.UserPoolSchemaArgs(
name="foo",
attribute_data_type="String",
mutable=False,
required=False,
developer_only_attribute=False,
string_attribute_constraints=aws.cognito.UserPoolSchemaStringAttributeConstraintsArgs(),
),
])
example_user = aws.cognito.User("exampleUser",
user_pool_id=example_user_pool.id,
username="example",
attributes={
"terraform": "true",
"foo": "bar",
"email": "no-reply@hashicorp.com",
"email_verified": "true",
})
```
## Import
Cognito User can be imported using the `user_pool_id`/`name` attributes concatenated, e.g.,
```sh
$ pulumi import aws:cognito/user:User user us-east-1_vG78M4goG/user
```
:param str resource_name: The name of the resource.
:param UserArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(UserArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
attributes: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
client_metadata: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
desired_delivery_mediums: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
force_alias_creation: Optional[pulumi.Input[bool]] = None,
message_action: Optional[pulumi.Input[str]] = None,
password: Optional[pulumi.Input[str]] = None,
temporary_password: Optional[pulumi.Input[str]] = None,
user_pool_id: Optional[pulumi.Input[str]] = None,
username: Optional[pulumi.Input[str]] = None,
validation_data: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = UserArgs.__new__(UserArgs)
__props__.__dict__["attributes"] = attributes
__props__.__dict__["client_metadata"] = client_metadata
__props__.__dict__["desired_delivery_mediums"] = desired_delivery_mediums
__props__.__dict__["enabled"] = enabled
__props__.__dict__["force_alias_creation"] = force_alias_creation
__props__.__dict__["message_action"] = message_action
__props__.__dict__["password"] = password
__props__.__dict__["temporary_password"] = temporary_password
if user_pool_id is None and not opts.urn:
raise TypeError("Missing required property 'user_pool_id'")
__props__.__dict__["user_pool_id"] = user_pool_id
if username is None and not opts.urn:
raise TypeError("Missing required property 'username'")
__props__.__dict__["username"] = username
__props__.__dict__["validation_data"] = validation_data
__props__.__dict__["creation_date"] = None
__props__.__dict__["last_modified_date"] = None
__props__.__dict__["mfa_setting_lists"] = None
__props__.__dict__["preferred_mfa_setting"] = None
__props__.__dict__["status"] = None
__props__.__dict__["sub"] = None
super(User, __self__).__init__(
'aws:cognito/user:User',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
attributes: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
client_metadata: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
creation_date: Optional[pulumi.Input[str]] = None,
desired_delivery_mediums: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
force_alias_creation: Optional[pulumi.Input[bool]] = None,
last_modified_date: Optional[pulumi.Input[str]] = None,
message_action: Optional[pulumi.Input[str]] = None,
mfa_setting_lists: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
password: Optional[pulumi.Input[str]] = None,
preferred_mfa_setting: Optional[pulumi.Input[str]] = None,
status: Optional[pulumi.Input[str]] = None,
sub: Optional[pulumi.Input[str]] = None,
temporary_password: Optional[pulumi.Input[str]] = None,
user_pool_id: Optional[pulumi.Input[str]] = None,
username: Optional[pulumi.Input[str]] = None,
validation_data: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None) -> 'User':
"""
Get an existing User resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] attributes: A map that contains user attributes and attribute values to be set for the user.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] client_metadata: A map of custom key-value pairs that you can provide as input for any custom workflows that user creation triggers. Amazon Cognito does not store the `client_metadata` value. This data is available only to Lambda triggers that are assigned to a user pool to support custom workflows. If your user pool configuration does not include triggers, the ClientMetadata parameter serves no purpose. For more information, see [Customizing User Pool Workflows with Lambda Triggers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html).
:param pulumi.Input[Sequence[pulumi.Input[str]]] desired_delivery_mediums: A list of mediums through which the welcome message will be sent. Allowed values are `EMAIL` and `SMS`. If provided, make sure you have also specified the `email` attribute for the `EMAIL` medium and `phone_number` for `SMS`. More than one value can be specified. Amazon Cognito does not store the `desired_delivery_mediums` value. Defaults to `["SMS"]`.
:param pulumi.Input[bool] enabled: Specifies whether the user should be enabled after creation. The welcome message will be sent regardless of the `enabled` value. This behavior can be changed with the `message_action` argument. Defaults to `true`.
:param pulumi.Input[bool] force_alias_creation: If this parameter is set to True and the `phone_number` or `email` address specified in the `attributes` parameter already exists as an alias with a different user, Amazon Cognito will migrate the alias from the previous user to the newly created user. The previous user will no longer be able to log in using that alias. Amazon Cognito does not store the `force_alias_creation` value. Defaults to `false`.
:param pulumi.Input[str] message_action: Set to `RESEND` to resend the invitation message to a user that already exists and reset the expiration limit on the user's account. Set to `SUPPRESS` to suppress sending the message. Only one value can be specified. Amazon Cognito does not store the `message_action` value.
:param pulumi.Input[str] password: The user's permanent password. This password must conform to the password policy specified by the user pool the user belongs to. The welcome message always contains only the `temporary_password` value. You can suppress sending the welcome message with the `message_action` argument. Amazon Cognito does not store the `password` value. Conflicts with `temporary_password`.
:param pulumi.Input[str] status: Current user status.
:param pulumi.Input[str] sub: Unique user ID that is never reassignable to another user.
:param pulumi.Input[str] temporary_password: The user's temporary password. Conflicts with `password`.
:param pulumi.Input[str] user_pool_id: The user pool ID for the user pool where the user will be created.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] validation_data: The user's validation data. This is an array of name-value pairs that contain user attributes and attribute values that you can use for custom validation, such as restricting the types of user accounts that can be registered. Amazon Cognito does not store the `validation_data` value. For more information, see [Customizing User Pool Workflows with Lambda Triggers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html).
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _UserState.__new__(_UserState)
__props__.__dict__["attributes"] = attributes
__props__.__dict__["client_metadata"] = client_metadata
__props__.__dict__["creation_date"] = creation_date
__props__.__dict__["desired_delivery_mediums"] = desired_delivery_mediums
__props__.__dict__["enabled"] = enabled
__props__.__dict__["force_alias_creation"] = force_alias_creation
__props__.__dict__["last_modified_date"] = last_modified_date
__props__.__dict__["message_action"] = message_action
__props__.__dict__["mfa_setting_lists"] = mfa_setting_lists
__props__.__dict__["password"] = password
__props__.__dict__["preferred_mfa_setting"] = preferred_mfa_setting
__props__.__dict__["status"] = status
__props__.__dict__["sub"] = sub
__props__.__dict__["temporary_password"] = temporary_password
__props__.__dict__["user_pool_id"] = user_pool_id
__props__.__dict__["username"] = username
__props__.__dict__["validation_data"] = validation_data
return User(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def attributes(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
A map that contains user attributes and attribute values to be set for the user.
"""
return pulumi.get(self, "attributes")
@property
@pulumi.getter(name="clientMetadata")
def client_metadata(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
A map of custom key-value pairs that you can provide as input for any custom workflows that user creation triggers. Amazon Cognito does not store the `client_metadata` value. This data is available only to Lambda triggers that are assigned to a user pool to support custom workflows. If your user pool configuration does not include triggers, the ClientMetadata parameter serves no purpose. For more information, see [Customizing User Pool Workflows with Lambda Triggers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html).
"""
return pulumi.get(self, "client_metadata")
@property
@pulumi.getter(name="creationDate")
def creation_date(self) -> pulumi.Output[str]:
return pulumi.get(self, "creation_date")
@property
@pulumi.getter(name="desiredDeliveryMediums")
def desired_delivery_mediums(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
A list of mediums through which the welcome message will be sent. Allowed values are `EMAIL` and `SMS`. If provided, make sure you have also specified the `email` attribute for the `EMAIL` medium and `phone_number` for `SMS`. More than one value can be specified. Amazon Cognito does not store the `desired_delivery_mediums` value. Defaults to `["SMS"]`.
"""
return pulumi.get(self, "desired_delivery_mediums")
@property
@pulumi.getter
def enabled(self) -> pulumi.Output[Optional[bool]]:
"""
Specifies whether the user should be enabled after creation. The welcome message will be sent regardless of the `enabled` value. This behavior can be changed with the `message_action` argument. Defaults to `true`.
"""
return pulumi.get(self, "enabled")
@property
@pulumi.getter(name="forceAliasCreation")
def force_alias_creation(self) -> pulumi.Output[Optional[bool]]:
"""
If this parameter is set to True and the `phone_number` or `email` address specified in the `attributes` parameter already exists as an alias with a different user, Amazon Cognito will migrate the alias from the previous user to the newly created user. The previous user will no longer be able to log in using that alias. Amazon Cognito does not store the `force_alias_creation` value. Defaults to `false`.
"""
return pulumi.get(self, "force_alias_creation")
@property
@pulumi.getter(name="lastModifiedDate")
def last_modified_date(self) -> pulumi.Output[str]:
return pulumi.get(self, "last_modified_date")
@property
@pulumi.getter(name="messageAction")
def message_action(self) -> pulumi.Output[Optional[str]]:
"""
Set to `RESEND` to resend the invitation message to a user that already exists and reset the expiration limit on the user's account. Set to `SUPPRESS` to suppress sending the message. Only one value can be specified. Amazon Cognito does not store the `message_action` value.
"""
return pulumi.get(self, "message_action")
@property
@pulumi.getter(name="mfaSettingLists")
def mfa_setting_lists(self) -> pulumi.Output[Sequence[str]]:
return pulumi.get(self, "mfa_setting_lists")
@property
@pulumi.getter
def password(self) -> pulumi.Output[Optional[str]]:
"""
The user's permanent password. This password must conform to the password policy specified by the user pool the user belongs to. The welcome message always contains only the `temporary_password` value. You can suppress sending the welcome message with the `message_action` argument. Amazon Cognito does not store the `password` value. Conflicts with `temporary_password`.
"""
return pulumi.get(self, "password")
@property
@pulumi.getter(name="preferredMfaSetting")
def preferred_mfa_setting(self) -> pulumi.Output[str]:
return pulumi.get(self, "preferred_mfa_setting")
@property
@pulumi.getter
def status(self) -> pulumi.Output[str]:
"""
Current user status.
"""
return pulumi.get(self, "status")
@property
@pulumi.getter
def sub(self) -> pulumi.Output[str]:
"""
Unique user ID that is never reassignable to another user.
"""
return pulumi.get(self, "sub")
@property
@pulumi.getter(name="temporaryPassword")
def temporary_password(self) -> pulumi.Output[Optional[str]]:
"""
The user's temporary password. Conflicts with `password`.
"""
return pulumi.get(self, "temporary_password")
@property
@pulumi.getter(name="userPoolId")
def user_pool_id(self) -> pulumi.Output[str]:
"""
The user pool ID for the user pool where the user will be created.
"""
return pulumi.get(self, "user_pool_id")
@property
@pulumi.getter
def username(self) -> pulumi.Output[str]:
return pulumi.get(self, "username")
@property
@pulumi.getter(name="validationData")
def validation_data(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
The user's validation data. This is an array of name-value pairs that contain user attributes and attribute values that you can use for custom validation, such as restricting the types of user accounts that can be registered. Amazon Cognito does not store the `validation_data` value. For more information, see [Customizing User Pool Workflows with Lambda Triggers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html).
"""
return pulumi.get(self, "validation_data")
| 62.587544 | 675 | 0.695562 | 6,871 | 53,262 | 5.232135 | 0.044389 | 0.069458 | 0.055688 | 0.036106 | 0.948679 | 0.9363 | 0.91872 | 0.908095 | 0.896356 | 0.869791 | 0 | 0.000214 | 0.211689 | 53,262 | 850 | 676 | 62.661176 | 0.856003 | 0.499512 | 0 | 0.747917 | 1 | 0 | 0.103472 | 0.017442 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.104167 | 0.010417 | 0.022917 | 0.279167 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
e027f32d604cbf155d99177b5c9b1bde5faa3f31 | 180 | py | Python | python/testData/formatter/alternativesAlignmentInOrPatternsInsideSequenceLikePattern_after.py | 06needhamt/intellij-community | 63d7b8030e4fdefeb4760e511e289f7e6b3a5c5b | [
"Apache-2.0"
] | null | null | null | python/testData/formatter/alternativesAlignmentInOrPatternsInsideSequenceLikePattern_after.py | 06needhamt/intellij-community | 63d7b8030e4fdefeb4760e511e289f7e6b3a5c5b | [
"Apache-2.0"
] | null | null | null | python/testData/formatter/alternativesAlignmentInOrPatternsInsideSequenceLikePattern_after.py | 06needhamt/intellij-community | 63d7b8030e4fdefeb4760e511e289f7e6b3a5c5b | [
"Apache-2.0"
] | null | null | null | match x:
case (1 |
2
| 3,
[1
| 2,
3, ],
Class(1
| 2,
3)
):
pass
| 13.846154 | 20 | 0.155556 | 14 | 180 | 2 | 0.571429 | 0.214286 | 0.321429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0.75 | 180 | 12 | 21 | 15 | 0.422222 | 0 | 0 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.083333 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
e03d7348b4cbfadb6cfc946a0a034e45dec6a876 | 295 | py | Python | gerapy_auto_extractor/patterns/title.py | Insutanto/GerapyAutoExtractor | 32fee4a94bd40b9f7ee2ce45b127520c255ae966 | [
"Apache-2.0"
] | 214 | 2020-06-27T23:23:04.000Z | 2022-03-28T04:34:40.000Z | Engine/gerapy_auto_extractor/patterns/title.py | Justin3go/xiu-search | 3f42d946c98e312fa2eb3397824758465b959518 | [
"MulanPSL-1.0"
] | 16 | 2020-07-07T06:47:16.000Z | 2022-01-19T05:27:55.000Z | Engine/gerapy_auto_extractor/patterns/title.py | Justin3go/xiu-search | 3f42d946c98e312fa2eb3397824758465b959518 | [
"MulanPSL-1.0"
] | 60 | 2020-06-29T07:37:55.000Z | 2022-02-25T03:07:16.000Z | METAS = [
'//meta[starts-with(@property, "og:title")]/@content',
'//meta[starts-with(@name, "og:title")]/@content',
'//meta[starts-with(@property, "title")]/@content',
'//meta[starts-with(@name, "title")]/@content',
'//meta[starts-with(@property, "page:title")]/@content',
]
| 36.875 | 60 | 0.59322 | 34 | 295 | 5.147059 | 0.294118 | 0.285714 | 0.4 | 0.502857 | 0.754286 | 0.754286 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115254 | 295 | 7 | 61 | 42.142857 | 0.670498 | 0 | 0 | 0 | 0 | 0 | 0.823729 | 0.684746 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e0413f418b374d52bbed4092164ccb2d2bf5d943 | 78 | py | Python | tests/test_devices_water_pump.py | nielse63/PiPlanter | 94ed5265fd4d9b4183edd4a67047d976ee5cdd72 | [
"MIT"
] | null | null | null | tests/test_devices_water_pump.py | nielse63/PiPlanter | 94ed5265fd4d9b4183edd4a67047d976ee5cdd72 | [
"MIT"
] | 118 | 2021-03-08T11:04:41.000Z | 2022-03-31T11:07:05.000Z | tests/test_devices_water_pump.py | nielse63/PiPlanter | 94ed5265fd4d9b4183edd4a67047d976ee5cdd72 | [
"MIT"
] | null | null | null | import pyplanter.devices.water_pump
def test_devices_water_pump():
pass
| 13 | 35 | 0.794872 | 11 | 78 | 5.272727 | 0.727273 | 0.413793 | 0.551724 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141026 | 78 | 5 | 36 | 15.6 | 0.865672 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 9 |
e06a54d9e84d024e851db6d937115da67fd8c155 | 84,269 | py | Python | reo/models.py | NREL/REopt_Lite_API | 7cd3afeb135ef89cb65dde1b305e53fe321b2488 | [
"BSD-3-Clause"
] | 41 | 2020-02-21T08:25:17.000Z | 2022-01-14T23:06:42.000Z | reo/models.py | NREL/REopt_Lite_API | 7cd3afeb135ef89cb65dde1b305e53fe321b2488 | [
"BSD-3-Clause"
] | 167 | 2020-02-17T17:26:47.000Z | 2022-01-20T20:36:54.000Z | reo/models.py | NREL/REopt_Lite_API | 7cd3afeb135ef89cb65dde1b305e53fe321b2488 | [
"BSD-3-Clause"
] | 31 | 2020-02-20T00:22:51.000Z | 2021-12-10T05:48:08.000Z | # *********************************************************************************
# REopt, Copyright (c) 2019-2020, Alliance for Sustainable Energy, LLC.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# Redistributions of source code must retain the above copyright notice, this list
# of conditions and the following disclaimer.
#
# Redistributions in binary form must reproduce the above copyright notice, this
# list of conditions and the following disclaimer in the documentation and/or other
# materials provided with the distribution.
#
# Neither the name of the copyright holder nor the names of its contributors may be
# used to endorse or promote products derived from this software without specific
# prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
# INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
# OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
# OF THE POSSIBILITY OF SUCH DAMAGE.
# *********************************************************************************
# from django.contrib.auth.models import User
from django.db import models
from django.contrib.postgres.fields import ArrayField
from django.forms.models import model_to_dict
from picklefield.fields import PickledObjectField
from reo.nested_inputs import nested_input_definitions
import logging
log = logging.getLogger(__name__)
import sys
import traceback as tb
import warnings
class URDBError(models.Model):
label = models.TextField(null=True, blank=True, default='')
type = models.TextField(null=True, blank=True, default='')
message = models.TextField(null=True, blank=True, default='')
def save_to_db(self):
try:
self.save()
except Exception as e:
exc_type, exc_value, exc_traceback = sys.exc_info()
message = 'Could not save URDBError {} for label {} error to the database - {} \n\n{}'.format(self.type,
self.label,
self.message,
tb.format_tb(
exc_traceback))
warnings.warn(message)
log.debug(message)
class ProfileModel(models.Model):
run_uuid = models.UUIDField(unique=True)
pre_setup_scenario_seconds = models.FloatField(null=True, blank=True)
setup_scenario_seconds = models.FloatField(null=True, blank=True)
reopt_seconds = models.FloatField(null=True, blank=True)
reopt_bau_seconds = models.FloatField(null=True, blank=True)
parse_run_outputs_seconds = models.FloatField(null=True, blank=True)
julia_input_construction_seconds = models.FloatField(null=True, blank=True)
julia_reopt_preamble_seconds = models.FloatField(null=True, blank=True)
julia_reopt_variables_seconds = models.FloatField(null=True, blank=True)
julia_reopt_constriants_seconds = models.FloatField(null=True, blank=True)
julia_reopt_optimize_seconds = models.FloatField(null=True, blank=True)
julia_reopt_postprocess_seconds = models.FloatField(null=True, blank=True)
pyjulia_start_seconds = models.FloatField(null=True, blank=True)
pyjulia_pkg_seconds = models.FloatField(null=True, blank=True)
pyjulia_activate_seconds = models.FloatField(null=True, blank=True)
pyjulia_include_model_seconds = models.FloatField(null=True, blank=True)
pyjulia_make_model_seconds = models.FloatField(null=True, blank=True)
pyjulia_include_reopt_seconds = models.FloatField(null=True, blank=True)
pyjulia_run_reopt_seconds = models.FloatField(null=True, blank=True)
julia_input_construction_seconds_bau = models.FloatField(null=True, blank=True)
julia_reopt_preamble_seconds_bau = models.FloatField(null=True, blank=True)
julia_reopt_variables_seconds_bau = models.FloatField(null=True, blank=True)
julia_reopt_constriants_seconds_bau = models.FloatField(null=True, blank=True)
julia_reopt_optimize_seconds_bau = models.FloatField(null=True, blank=True)
julia_reopt_postprocess_seconds_bau = models.FloatField(null=True, blank=True)
pyjulia_start_seconds_bau = models.FloatField(null=True, blank=True)
pyjulia_pkg_seconds_bau = models.FloatField(null=True, blank=True)
pyjulia_activate_seconds_bau = models.FloatField(null=True, blank=True)
pyjulia_include_model_seconds_bau = models.FloatField(null=True, blank=True)
pyjulia_make_model_seconds_bau = models.FloatField(null=True, blank=True)
pyjulia_include_reopt_seconds_bau = models.FloatField(null=True, blank=True)
pyjulia_run_reopt_seconds_bau = models.FloatField(null=True, blank=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class ScenarioModel(models.Model):
# Inputs
# user = models.ForeignKey(User, null=True, blank=True)
run_uuid = models.UUIDField(unique=True)
api_version = models.TextField(null=True, blank=True)
user_uuid = models.TextField(null=True, blank=True)
webtool_uuid = models.TextField(null=True, blank=True)
job_type = models.TextField(null=True, blank=True)
description = models.TextField(null=True, blank=True)
status = models.TextField(null=True, blank=True)
timeout_seconds = models.IntegerField(null=True, blank=True)
time_steps_per_hour = models.IntegerField(null=True, blank=True)
created = models.DateTimeField(auto_now_add=True)
optimality_tolerance_bau = models.FloatField(null=True, blank=True)
optimality_tolerance_techs = models.FloatField(null=True, blank=True)
add_soc_incentive = models.BooleanField(null=True, blank=True)
lower_bound = models.FloatField(null=True, blank=True)
optimality_gap = models.FloatField(null=True, blank=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class SiteModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=True)
address = models.TextField(null=True, blank=True)
latitude = models.FloatField(null=True, blank=True)
longitude = models.FloatField(null=True, blank=True)
land_acres = models.FloatField(null=True, blank=True)
roof_squarefeet = models.FloatField(null=True, blank=True)
year_one_emissions_lb_C02 = models.FloatField(null=True, blank=True)
year_one_emissions_bau_lb_C02 = models.FloatField(null=True, blank=True)
outdoor_air_temp_degF = ArrayField(models.FloatField(blank=True, null=True), default=list, null=True)
elevation_ft = models.FloatField(null=True, blank=True)
renewable_electricity_energy_pct = models.FloatField(null=True, blank=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class FinancialModel(models.Model):
# Input
run_uuid = models.UUIDField(unique=True)
analysis_years = models.IntegerField(null=True, blank=True)
escalation_pct = models.FloatField(null=True, blank=True)
boiler_fuel_escalation_pct = models.FloatField(null=True, blank=True)
newboiler_fuel_escalation_pct = models.FloatField(null=True, blank=True)
chp_fuel_escalation_pct = models.FloatField(null=True, blank=True)
om_cost_escalation_pct = models.FloatField(null=True, blank=True)
offtaker_discount_pct = models.FloatField(null=True, blank=True)
offtaker_tax_pct = models.FloatField(null=True, blank=True)
value_of_lost_load_us_dollars_per_kwh = models.FloatField(null=True, blank=True)
microgrid_upgrade_cost_pct = models.FloatField(null=True, blank=True)
third_party_ownership = models.BooleanField(null=True, blank=True)
owner_discount_pct = models.FloatField(null=True, blank=True)
owner_tax_pct = models.FloatField(null=True, blank=True)
# Outputs
lcc_us_dollars = models.FloatField(null=True, blank=True)
lcc_bau_us_dollars = models.FloatField(null=True, blank=True)
npv_us_dollars = models.FloatField(null=True, blank=True)
net_capital_costs_plus_om_us_dollars = models.FloatField(null=True, blank=True)
avoided_outage_costs_us_dollars = models.FloatField(null=True, blank=True)
microgrid_upgrade_cost_us_dollars = models.FloatField(null=True, blank=True)
net_capital_costs = models.FloatField(null=True, blank=True)
net_om_us_dollars_bau = models.FloatField(null=True, blank=True)
initial_capital_costs = models.FloatField(null=True, blank=True)
replacement_costs = models.FloatField(null=True, blank=True)
om_and_replacement_present_cost_after_tax_us_dollars = models.FloatField(null=True, blank=True)
initial_capital_costs_after_incentives = models.FloatField(null=True, blank=True)
total_om_costs_us_dollars = models.FloatField(null=True, blank=True)
total_om_costs_bau_us_dollars = models.FloatField(null=True, blank=True)
year_one_om_costs_us_dollars = models.FloatField(null=True, blank=True)
year_one_om_costs_before_tax_us_dollars = models.FloatField(null=True, blank=True)
year_one_om_costs_before_tax_bau_us_dollars = models.FloatField(null=True, blank=True)
simple_payback_years = models.FloatField(null=True, blank=True)
irr_pct = models.FloatField(null=True, blank=True)
net_present_cost_us_dollars = models.FloatField(null=True, blank=True)
annualized_payment_to_third_party_us_dollars = models.FloatField(null=True, blank=True)
offtaker_annual_free_cashflow_series_us_dollars = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True)
offtaker_discounted_annual_free_cashflow_series_us_dollars = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True)
offtaker_annual_free_cashflow_series_bau_us_dollars = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True)
offtaker_discounted_annual_free_cashflow_series_bau_us_dollars = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True)
developer_annual_free_cashflow_series_us_dollars = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True)
developer_om_and_replacement_present_cost_after_tax_us_dollars = models.FloatField(null=True, blank=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
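# NOTE (illustrative assumption, not stated in this source): the financial
# output fields above are conventionally related, e.g. the net present value
# is the business-as-usual life cycle cost minus the optimized life cycle
# cost, roughly:
#
#     npv_us_dollars = lcc_bau_us_dollars - lcc_us_dollars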
class LoadProfileModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=True)
doe_reference_name = ArrayField(
models.TextField(null=True, blank=True), default=list, null=True)
annual_kwh = models.FloatField(null=True, blank=True)
percent_share = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True)
year = models.IntegerField(null=True, blank=True)
monthly_totals_kwh = ArrayField(models.FloatField(blank=True), default=list, null=True)
loads_kw = ArrayField(models.FloatField(blank=True), default=list, null=True)
critical_loads_kw = ArrayField(models.FloatField(blank=True), default=list, null=True)
loads_kw_is_net = models.BooleanField(null=True, blank=True)
critical_loads_kw_is_net = models.BooleanField(null=True, blank=True)
outage_start_time_step = models.IntegerField(null=True, blank=True)
outage_end_time_step = models.IntegerField(null=True, blank=True)
critical_load_pct = models.FloatField(null=True, blank=True)
outage_is_major_event = models.BooleanField(null=True, blank=True)
# Outputs
year_one_electric_load_series_kw = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True)
critical_load_series_kw = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True)
annual_calculated_kwh = models.FloatField(null=True, blank=True)
sustain_hours = models.IntegerField(null=True, blank=True)
bau_sustained_time_steps = models.IntegerField(null=True, blank=True)
resilience_check_flag = models.BooleanField(null=True, blank=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class LoadProfileBoilerFuelModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=True)
annual_mmbtu = models.FloatField(null=True, blank=True)
monthly_mmbtu = ArrayField(models.FloatField(blank=True), default=list, null=True)
loads_mmbtu_per_hour = ArrayField(models.FloatField(blank=True), default=list, null=True)
doe_reference_name = ArrayField(
models.TextField(null=True, blank=True), default=list, null=True)
percent_share = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True)
addressable_load_fraction = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True)
space_heating_fraction_of_heating_load = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True)
# Outputs
annual_calculated_boiler_fuel_load_mmbtu_bau = models.FloatField(null=True, blank=True)
year_one_boiler_fuel_load_series_mmbtu_per_hr = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True)
year_one_boiler_thermal_load_series_mmbtu_per_hr = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class LoadProfileChillerThermalModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=True)
loads_ton = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True)
annual_tonhour = models.FloatField(null=True, blank=True)
monthly_tonhour = ArrayField(models.FloatField(blank=True), default=list, null=True)
annual_fraction = models.FloatField(null=True, blank=True)
monthly_fraction = ArrayField(models.FloatField(blank=True), default=list, null=True)
loads_fraction = ArrayField(models.FloatField(blank=True), default=list, null=True)
doe_reference_name = ArrayField(
models.TextField(null=True, blank=True), default=list, null=True)
percent_share = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True)
chiller_cop = models.FloatField(null=True, blank=True)
# Outputs
annual_calculated_kwh_bau = models.FloatField(blank=True, null=True)
year_one_chiller_electric_load_series_kw_bau = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True)
year_one_chiller_electric_load_series_kw = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True)
year_one_chiller_thermal_load_series_ton = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class ElectricTariffModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=True)
urdb_utility_name = models.TextField(null=True, blank=True)
urdb_rate_name = models.TextField(null=True, blank=True)
urdb_label = models.TextField(null=True, blank=True)
blended_monthly_rates_us_dollars_per_kwh = ArrayField(models.FloatField(blank=True), default=list, null=True)
blended_monthly_demand_charges_us_dollars_per_kw = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True)
blended_annual_rates_us_dollars_per_kwh = models.FloatField(blank=True, default=0, null=True)
blended_annual_demand_charges_us_dollars_per_kw = models.FloatField(blank=True, default=0, null=True)
net_metering_limit_kw = models.FloatField(null=True, blank=True)
interconnection_limit_kw = models.FloatField(null=True, blank=True)
wholesale_rate_us_dollars_per_kwh = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True)
wholesale_rate_above_site_load_us_dollars_per_kwh = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True)
urdb_response = PickledObjectField(null=True, editable=True)
add_blended_rates_to_urdb_rate = models.BooleanField(null=True, blank=True)
add_tou_energy_rates_to_urdb_rate = models.BooleanField(null=True, blank=True)
tou_energy_rates_us_dollars_per_kwh = ArrayField(models.FloatField(null=True, blank=True), null=True, default=list)
emissions_factor_series_lb_CO2_per_kwh = ArrayField(models.FloatField(null=True, blank=True), null=True, default=list)
chp_standby_rate_us_dollars_per_kw_per_month = models.FloatField(blank=True, null=True)
chp_does_not_reduce_demand_charges = models.BooleanField(null=True, blank=True)
emissions_region = models.TextField(null=True, blank=True)
coincident_peak_load_active_timesteps = ArrayField(ArrayField(models.FloatField(null=True, blank=True), null=True, default=list), null=True, default=list)
coincident_peak_load_charge_us_dollars_per_kw = ArrayField(models.FloatField(null=True, blank=True), null=True, default=list)
# Outputs
year_one_energy_cost_us_dollars = models.FloatField(null=True, blank=True)
year_one_demand_cost_us_dollars = models.FloatField(null=True, blank=True)
year_one_fixed_cost_us_dollars = models.FloatField(null=True, blank=True)
year_one_min_charge_adder_us_dollars = models.FloatField(null=True, blank=True)
year_one_energy_cost_bau_us_dollars = models.FloatField(null=True, blank=True)
year_one_demand_cost_bau_us_dollars = models.FloatField(null=True, blank=True)
year_one_fixed_cost_bau_us_dollars = models.FloatField(null=True, blank=True)
year_one_min_charge_adder_bau_us_dollars = models.FloatField(null=True, blank=True)
total_energy_cost_us_dollars = models.FloatField(null=True, blank=True)
total_demand_cost_us_dollars = models.FloatField(null=True, blank=True)
total_fixed_cost_us_dollars = models.FloatField(null=True, blank=True)
total_min_charge_adder_us_dollars = models.FloatField(null=True, blank=True)
total_energy_cost_bau_us_dollars = models.FloatField(null=True, blank=True)
total_demand_cost_bau_us_dollars = models.FloatField(null=True, blank=True)
total_fixed_cost_bau_us_dollars = models.FloatField(null=True, blank=True)
total_export_benefit_us_dollars = models.FloatField(null=True, blank=True)
total_export_benefit_bau_us_dollars = models.FloatField(null=True, blank=True)
total_min_charge_adder_bau_us_dollars = models.FloatField(null=True, blank=True)
year_one_bill_us_dollars = models.FloatField(null=True, blank=True)
year_one_bill_bau_us_dollars = models.FloatField(null=True, blank=True)
year_one_export_benefit_us_dollars = models.FloatField(null=True, blank=True)
year_one_export_benefit_bau_us_dollars = models.FloatField(null=True, blank=True)
year_one_energy_cost_series_us_dollars_per_kwh = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_demand_cost_series_us_dollars_per_kw = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_to_load_series_kw = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_to_load_series_bau_kw = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_to_battery_series_kw = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_energy_supplied_kwh = models.FloatField(null=True, blank=True)
year_one_energy_supplied_kwh_bau = models.FloatField(null=True, blank=True)
year_one_emissions_lb_C02 = models.FloatField(null=True, blank=True)
year_one_emissions_bau_lb_C02 = models.FloatField(null=True, blank=True)
year_one_coincident_peak_cost_us_dollars = models.FloatField(null=True, blank=True)
year_one_coincident_peak_cost_bau_us_dollars = models.FloatField(null=True, blank=True)
total_coincident_peak_cost_us_dollars = models.FloatField(null=True, blank=True)
total_coincident_peak_cost_bau_us_dollars = models.FloatField(null=True, blank=True)
year_one_chp_standby_cost_us_dollars = models.FloatField(null=True, blank=True)
total_chp_standby_cost_us_dollars = models.FloatField(null=True, blank=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class FuelTariffModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=True)
existing_boiler_fuel_type = models.TextField(null=True, blank=True)
boiler_fuel_blended_annual_rates_us_dollars_per_mmbtu = models.FloatField(null=True, blank=True)
boiler_fuel_blended_monthly_rates_us_dollars_per_mmbtu = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True)
chp_fuel_type = models.TextField(null=True, blank=True)
chp_fuel_blended_annual_rates_us_dollars_per_mmbtu = models.FloatField(null=True, blank=True)
chp_fuel_blended_monthly_rates_us_dollars_per_mmbtu = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True)
newboiler_fuel_type = models.TextField(null=True, blank=True)
newboiler_fuel_blended_annual_rates_us_dollars_per_mmbtu = models.FloatField(null=True, blank=True)
newboiler_fuel_blended_monthly_rates_us_dollars_per_mmbtu = ArrayField(models.FloatField(null=True, blank=True), default=list, null=True)
# Outputs
total_boiler_fuel_cost_us_dollars = models.FloatField(null=True, blank=True)
year_one_boiler_fuel_cost_us_dollars = models.FloatField(null=True, blank=True)
year_one_boiler_fuel_cost_bau_us_dollars = models.FloatField(null=True, blank=True)
total_chp_fuel_cost_us_dollars = models.FloatField(null=True, blank=True)
year_one_chp_fuel_cost_us_dollars = models.FloatField(null=True, blank=True)
total_boiler_fuel_cost_bau_us_dollars = models.FloatField(null=True, blank=True)
total_newboiler_fuel_cost_us_dollars = models.FloatField(null=True, blank=True)
year_one_newboiler_fuel_cost_us_dollars = models.FloatField(null=True, blank=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class PVModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=False)
existing_kw = models.FloatField(null=True, blank=True)
min_kw = models.FloatField(null=True, blank=True)
max_kw = models.FloatField(null=True, blank=True)
installed_cost_us_dollars_per_kw = models.FloatField(null=True, blank=True)
om_cost_us_dollars_per_kw = models.FloatField(null=True, blank=True)
macrs_option_years = models.IntegerField(null=True, blank=True)
macrs_bonus_pct = models.FloatField(null=True, blank=True)
macrs_itc_reduction = models.FloatField(null=True, blank=True)
federal_itc_pct = models.FloatField(null=True, blank=True)
state_ibi_pct = models.FloatField(null=True, blank=True)
state_ibi_max_us_dollars = models.FloatField(null=True, blank=True)
utility_ibi_pct = models.FloatField(null=True, blank=True)
utility_ibi_max_us_dollars = models.FloatField(null=True, blank=True)
federal_rebate_us_dollars_per_kw = models.FloatField(null=True, blank=True)
state_rebate_us_dollars_per_kw = models.FloatField(null=True, blank=True)
state_rebate_max_us_dollars = models.FloatField(null=True, blank=True)
utility_rebate_us_dollars_per_kw = models.FloatField(null=True, blank=True)
utility_rebate_max_us_dollars = models.FloatField(null=True, blank=True)
pbi_us_dollars_per_kwh = models.FloatField(null=True, blank=True)
pbi_max_us_dollars = models.FloatField(null=True, blank=True)
pbi_years = models.FloatField(null=True, blank=True)
pbi_system_max_kw = models.FloatField(null=True, blank=True)
degradation_pct = models.FloatField(null=True, blank=True)
azimuth = models.FloatField(null=True, blank=True)
losses = models.FloatField(null=True, blank=True)
array_type = models.IntegerField(null=True, blank=True)
module_type = models.IntegerField(null=True, blank=True)
gcr = models.FloatField(null=True, blank=True)
dc_ac_ratio = models.FloatField(null=True, blank=True)
inv_eff = models.FloatField(null=True, blank=True)
radius = models.FloatField(null=True, blank=True)
tilt = models.FloatField(null=True, blank=True)
prod_factor_series_kw = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True)
pv_number = models.IntegerField(null=True, blank=True)
pv_name = models.TextField(null=True, blank=True)
location = models.TextField(null=True, blank=True)
can_net_meter = models.BooleanField(null=True, blank=True)
can_wholesale = models.BooleanField(null=True, blank=True)
can_export_beyond_site_load = models.BooleanField(null=True, blank=True)
can_curtail = models.BooleanField(null=True, blank=True)
# Outputs
size_kw = models.FloatField(null=True, blank=True)
station_latitude = models.FloatField(null=True, blank=True)
station_longitude = models.FloatField(null=True, blank=True)
station_distance_km = models.FloatField(null=True, blank=True)
average_yearly_energy_produced_kwh = models.FloatField(null=True, blank=True)
average_yearly_energy_produced_bau_kwh = models.FloatField(null=True, blank=True)
average_yearly_energy_exported_kwh = models.FloatField(null=True, blank=True)
year_one_energy_produced_kwh = models.FloatField(null=True, blank=True)
year_one_energy_produced_bau_kwh = models.FloatField(null=True, blank=True)
year_one_power_production_series_kw = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
year_one_to_battery_series_kw = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
year_one_to_load_series_kw = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
year_one_to_grid_series_kw = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
existing_pv_om_cost_us_dollars = models.FloatField(null=True, blank=True)
year_one_curtailed_production_series_kw = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
lcoe_us_dollars_per_kwh = models.FloatField(null=True, blank=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
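# NOTE (illustrative annotation, not part of the original module):
# PVModel.run_uuid is declared with unique=False, unlike the other models,
# so a single run can store several PV arrays (distinguished by pv_number /
# pv_name). A hypothetical lookup therefore uses filter() rather than get():
#
#     pvs = PVModel.objects.filter(run_uuid=some_run_uuid).order_by('pv_number')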
class WindModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=True)
size_class = models.TextField(null=True, blank=True)
wind_meters_per_sec = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
wind_direction_degrees = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
temperature_celsius = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
pressure_atmospheres = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
min_kw = models.FloatField(null=True, blank=True)
max_kw = models.FloatField(null=True, blank=True)
installed_cost_us_dollars_per_kw = models.FloatField(null=True, blank=True)
om_cost_us_dollars_per_kw = models.FloatField(null=True, blank=True)
macrs_option_years = models.IntegerField(null=True, blank=True)
macrs_bonus_pct = models.FloatField(null=True, blank=True)
macrs_itc_reduction = models.FloatField(null=True, blank=True)
federal_itc_pct = models.FloatField(null=True, blank=True)
state_ibi_pct = models.FloatField(null=True, blank=True)
state_ibi_max_us_dollars = models.FloatField(null=True, blank=True)
utility_ibi_pct = models.FloatField(null=True, blank=True)
utility_ibi_max_us_dollars = models.FloatField(null=True, blank=True)
federal_rebate_us_dollars_per_kw = models.FloatField(null=True, blank=True)
state_rebate_us_dollars_per_kw = models.FloatField(null=True, blank=True)
state_rebate_max_us_dollars = models.FloatField(null=True, blank=True)
utility_rebate_us_dollars_per_kw = models.FloatField(null=True, blank=True)
utility_rebate_max_us_dollars = models.FloatField(null=True, blank=True)
pbi_us_dollars_per_kwh = models.FloatField(null=True, blank=True)
pbi_max_us_dollars = models.FloatField(null=True, blank=True)
pbi_years = models.FloatField(null=True, blank=True)
pbi_system_max_kw = models.FloatField(null=True, blank=True)
prod_factor_series_kw = ArrayField(
models.FloatField(blank=True), default=list, null=True)
can_net_meter = models.BooleanField(null=True, blank=True)
can_wholesale = models.BooleanField(null=True, blank=True)
can_export_beyond_site_load = models.BooleanField(null=True, blank=True)
can_curtail = models.BooleanField(null=True, blank=True)
# Outputs
size_kw = models.FloatField(null=True, blank=True)
average_yearly_energy_produced_kwh = models.FloatField(null=True, blank=True)
average_yearly_energy_exported_kwh = models.FloatField(null=True, blank=True)
year_one_energy_produced_kwh = models.FloatField(null=True, blank=True)
year_one_power_production_series_kw = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
year_one_to_battery_series_kw = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
year_one_to_load_series_kw = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
year_one_to_grid_series_kw = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
lcoe_us_dollars_per_kwh = models.FloatField(null=True, blank=True)
year_one_curtailed_production_series_kw = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class StorageModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=True)
min_kw = models.FloatField(null=True, blank=True)
max_kw = models.FloatField(null=True, blank=True)
min_kwh = models.FloatField(null=True, blank=True)
max_kwh = models.FloatField(null=True, blank=True)
internal_efficiency_pct = models.FloatField(null=True, blank=True)
inverter_efficiency_pct = models.FloatField(null=True, blank=True)
rectifier_efficiency_pct = models.FloatField(null=True, blank=True)
soc_min_pct = models.FloatField(null=True, blank=True)
soc_init_pct = models.FloatField(null=True, blank=True)
canGridCharge = models.BooleanField(null=True, blank=True)
installed_cost_us_dollars_per_kw = models.FloatField(null=True, blank=True)
installed_cost_us_dollars_per_kwh = models.FloatField(null=True, blank=True)
replace_cost_us_dollars_per_kw = models.FloatField(null=True, blank=True)
replace_cost_us_dollars_per_kwh = models.FloatField(null=True, blank=True)
inverter_replacement_year = models.IntegerField(null=True, blank=True)
battery_replacement_year = models.IntegerField(null=True, blank=True)
macrs_option_years = models.IntegerField(null=True, blank=True)
macrs_bonus_pct = models.FloatField(null=True, blank=True)
macrs_itc_reduction = models.FloatField(null=True, blank=True)
total_itc_pct = models.FloatField(null=True, blank=True)
total_rebate_us_dollars_per_kw = models.IntegerField(null=True, blank=True)
total_rebate_us_dollars_per_kwh = models.IntegerField(null=True, blank=True)
# Outputs
size_kw = models.FloatField(null=True, blank=True)
size_kwh = models.FloatField(null=True, blank=True)
year_one_to_load_series_kw = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
year_one_to_grid_series_kw = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
year_one_soc_series_pct = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class GeneratorModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=True)
existing_kw = models.FloatField(null=True, blank=True)
min_kw = models.FloatField(null=True, blank=True)
max_kw = models.FloatField(null=True, blank=True)
installed_cost_us_dollars_per_kw = models.FloatField(null=True, blank=True)
om_cost_us_dollars_per_kw = models.FloatField(null=True, blank=True)
om_cost_us_dollars_per_kwh = models.FloatField(null=True, blank=True)
diesel_fuel_cost_us_dollars_per_gallon = models.FloatField(null=True, blank=True)
fuel_slope_gal_per_kwh = models.FloatField(null=True, blank=True)
fuel_intercept_gal_per_hr = models.FloatField(null=True, blank=True)
fuel_avail_gal = models.FloatField(null=True, blank=True)
min_turn_down_pct = models.FloatField(null=True, blank=True)
generator_only_runs_during_grid_outage = models.BooleanField(null=True, blank=True)
generator_sells_energy_back_to_grid = models.BooleanField(null=True, blank=True)
macrs_option_years = models.IntegerField(null=True, blank=True)
macrs_bonus_pct = models.FloatField(null=True, blank=True)
macrs_itc_reduction = models.FloatField(null=True, blank=True)
federal_itc_pct = models.FloatField(null=True, blank=True)
state_ibi_pct = models.FloatField(null=True, blank=True)
state_ibi_max_us_dollars = models.FloatField(null=True, blank=True)
utility_ibi_pct = models.FloatField(null=True, blank=True)
utility_ibi_max_us_dollars = models.FloatField(null=True, blank=True)
federal_rebate_us_dollars_per_kw = models.FloatField(null=True, blank=True)
state_rebate_us_dollars_per_kw = models.FloatField(null=True, blank=True)
state_rebate_max_us_dollars = models.FloatField(null=True, blank=True)
utility_rebate_us_dollars_per_kw = models.FloatField(null=True, blank=True)
utility_rebate_max_us_dollars = models.FloatField(null=True, blank=True)
pbi_us_dollars_per_kwh = models.FloatField(null=True, blank=True)
pbi_max_us_dollars = models.FloatField(null=True, blank=True)
pbi_years = models.FloatField(null=True, blank=True)
pbi_system_max_kw = models.FloatField(null=True, blank=True)
emissions_factor_lb_CO2_per_gal = models.FloatField(null=True, blank=True)
can_net_meter = models.BooleanField(null=True, blank=True)
can_wholesale = models.BooleanField(null=True, blank=True)
can_export_beyond_site_load = models.BooleanField(null=True, blank=True)
can_curtail = models.BooleanField(null=True, blank=True)
# Outputs
fuel_used_gal = models.FloatField(null=True, blank=True)
fuel_used_gal_bau = models.FloatField(null=True, blank=True)
size_kw = models.FloatField(null=True, blank=True)
average_yearly_energy_produced_kwh = models.FloatField(null=True, blank=True)
average_yearly_energy_exported_kwh = models.FloatField(null=True, blank=True)
year_one_energy_produced_kwh = models.FloatField(null=True, blank=True)
year_one_power_production_series_kw = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
year_one_to_battery_series_kw = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
year_one_to_load_series_kw = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
year_one_to_grid_series_kw = ArrayField(
models.FloatField(null=True, blank=True), null=True, blank=True, default=list)
year_one_variable_om_cost_us_dollars = models.FloatField(null=True, blank=True)
year_one_fuel_cost_us_dollars = models.FloatField(null=True, blank=True)
year_one_fixed_om_cost_us_dollars = models.FloatField(null=True, blank=True)
total_variable_om_cost_us_dollars = models.FloatField(null=True, blank=True)
total_fuel_cost_us_dollars = models.FloatField(null=True, blank=True)
total_fixed_om_cost_us_dollars = models.FloatField(null=True, blank=True)
existing_gen_year_one_variable_om_cost_us_dollars = models.FloatField(null=True, blank=True)
existing_gen_year_one_fuel_cost_us_dollars = models.FloatField(null=True, blank=True)
existing_gen_total_variable_om_cost_us_dollars = models.FloatField(null=True, blank=True)
existing_gen_total_fuel_cost_us_dollars = models.FloatField(null=True, blank=True)
existing_gen_total_fixed_om_cost_us_dollars = models.FloatField(null=True, blank=True)
year_one_emissions_lb_C02 = models.FloatField(null=True, blank=True)
year_one_emissions_bau_lb_C02 = models.FloatField(null=True, blank=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class CHPModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=True)
prime_mover = models.TextField(null=True, blank=True)
size_class = models.IntegerField(null=True, blank=True)
min_kw = models.FloatField(null=True, blank=True)
max_kw = models.FloatField(null=True, blank=True)
min_allowable_kw = models.FloatField(null=True, blank=True)
installed_cost_us_dollars_per_kw = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True)
tech_size_for_cost_curve = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True)
om_cost_us_dollars_per_kw = models.FloatField(null=True, blank=True)
om_cost_us_dollars_per_kwh = models.FloatField(null=True, blank=True)
om_cost_us_dollars_per_hr_per_kw_rated = models.FloatField(null=True, blank=True)
elec_effic_full_load = models.FloatField(null=True, blank=True)
elec_effic_half_load = models.FloatField(null=True, blank=True)
min_turn_down_pct = models.FloatField(null=True, blank=True)
thermal_effic_full_load = models.FloatField(null=True, blank=True)
thermal_effic_half_load = models.FloatField(null=True, blank=True)
supplementary_firing_capital_cost_per_kw = models.FloatField(null=True, blank=True)
supplementary_firing_max_steam_ratio = models.FloatField(null=True, blank=True)
supplementary_firing_efficiency = models.FloatField(null=True, blank=True)
macrs_option_years = models.IntegerField(null=True, blank=True)
macrs_bonus_pct = models.FloatField(null=True, blank=True)
macrs_itc_reduction = models.FloatField(null=True, blank=True)
federal_itc_pct = models.FloatField(null=True, blank=True)
state_ibi_pct = models.FloatField(null=True, blank=True)
state_ibi_max_us_dollars = models.FloatField(null=True, blank=True)
utility_ibi_pct = models.FloatField(null=True, blank=True)
utility_ibi_max_us_dollars = models.FloatField(null=True, blank=True)
federal_rebate_us_dollars_per_kw = models.FloatField(null=True, blank=True)
state_rebate_us_dollars_per_kw = models.FloatField(null=True, blank=True)
state_rebate_max_us_dollars = models.FloatField(null=True, blank=True)
utility_rebate_us_dollars_per_kw = models.FloatField(null=True, blank=True)
utility_rebate_max_us_dollars = models.FloatField(null=True, blank=True)
pbi_us_dollars_per_kwh = models.FloatField(null=True, blank=True)
pbi_max_us_dollars = models.FloatField(null=True, blank=True)
pbi_years = models.FloatField(null=True, blank=True)
pbi_system_max_kw = models.FloatField(null=True, blank=True)
emissions_factor_lb_CO2_per_mmbtu = models.FloatField(null=True, blank=True)
use_default_derate = models.BooleanField(null=True, blank=True)
max_derate_factor = models.FloatField(null=True, blank=True)
derate_start_temp_degF = models.FloatField(null=True, blank=True)
derate_slope_pct_per_degF = models.FloatField(null=True, blank=True)
chp_unavailability_periods = ArrayField(
PickledObjectField(null=True, editable=True), null=True)
can_net_meter = models.BooleanField(null=True, blank=True)
can_wholesale = models.BooleanField(null=True, blank=True)
can_export_beyond_site_load = models.BooleanField(null=True, blank=True)
can_curtail = models.BooleanField(null=True, blank=True)
cooling_thermal_factor = models.FloatField(null=True, blank=True)
can_supply_steam_turbine = models.BooleanField(null=True, blank=True)
# Outputs
size_kw = models.FloatField(null=True, blank=True)
size_supplementary_firing_kw = models.FloatField(null=True, blank=True)
year_one_fuel_used_mmbtu = models.FloatField(null=True, blank=True)
year_one_electric_energy_produced_kwh = models.FloatField(null=True, blank=True)
year_one_thermal_energy_produced_mmbtu = models.FloatField(null=True, blank=True)
year_one_electric_production_series_kw = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_to_battery_series_kw = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_to_load_series_kw = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_to_grid_series_kw = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_thermal_to_load_series_mmbtu_per_hour = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_thermal_to_tes_series_mmbtu_per_hour = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_thermal_to_steamturbine_series_mmbtu_per_hour = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_thermal_to_waste_series_mmbtu_per_hour = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_emissions_lb_C02 = models.FloatField(null=True, blank=True)
year_one_emissions_bau_lb_C02 = models.FloatField(null=True, blank=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class AbsorptionChillerModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=True)
min_ton = models.FloatField(null=True, blank=True)
max_ton = models.FloatField(null=True, blank=True)
chiller_cop = models.FloatField(null=True, blank=True)
chiller_elec_cop = models.FloatField(null=True, blank=True)
installed_cost_us_dollars_per_ton = models.FloatField(null=True, blank=True)
om_cost_us_dollars_per_ton = models.FloatField(null=True, blank=True)
macrs_option_years = models.IntegerField(null=True, blank=True)
macrs_bonus_pct = models.FloatField(null=True, blank=True)
# Outputs
size_ton = models.FloatField(null=True, blank=True)
year_one_absorp_chl_thermal_to_load_series_ton = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_absorp_chl_thermal_to_tes_series_ton = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
    year_one_absorp_chl_thermal_consumption_series_mmbtu_per_hr = ArrayField(
        models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_absorp_chl_thermal_consumption_mmbtu = models.FloatField(null=True, blank=True)
year_one_absorp_chl_thermal_production_tonhr = models.FloatField(null=True, blank=True)
year_one_absorp_chl_electric_consumption_series_kw = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_absorp_chl_electric_consumption_kwh = models.FloatField(null=True, blank=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class BoilerModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=True)
max_thermal_factor_on_peak_load = models.FloatField(null=True, blank=True)
existing_boiler_production_type_steam_or_hw = models.TextField(null=True, blank=True)
boiler_efficiency = models.FloatField(blank=True, default=0, null=True)
emissions_factor_lb_CO2_per_mmbtu = models.FloatField(null=True, blank=True)
can_supply_steam_turbine = models.BooleanField(null=True, blank=True)
# Outputs
year_one_boiler_fuel_consumption_series_mmbtu_per_hr = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_boiler_thermal_production_series_mmbtu_per_hr = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_thermal_to_load_series_mmbtu_per_hour = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_thermal_to_tes_series_mmbtu_per_hour = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_thermal_to_steamturbine_series_mmbtu_per_hour = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_boiler_fuel_consumption_mmbtu = models.FloatField(null=True, blank=True)
year_one_boiler_thermal_production_mmbtu = models.FloatField(null=True, blank=True)
year_one_emissions_lb_C02 = models.FloatField(null=True, blank=True)
year_one_emissions_bau_lb_C02 = models.FloatField(null=True, blank=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class ElectricChillerModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=True)
max_thermal_factor_on_peak_load = models.FloatField(null=True, blank=True)
# Outputs
year_one_electric_chiller_thermal_to_load_series_ton = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_electric_chiller_thermal_to_tes_series_ton = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_electric_chiller_electric_consumption_series_kw = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_electric_chiller_electric_consumption_kwh = models.FloatField(null=True, blank=True)
year_one_electric_chiller_thermal_production_tonhr = models.FloatField(null=True, blank=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class ColdTESModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=True)
min_gal = models.FloatField(null=True, blank=True)
max_gal = models.FloatField(null=True, blank=True)
chilled_supply_water_temp_degF = models.FloatField(null=True, blank=True)
warmed_return_water_temp_degF = models.FloatField(null=True, blank=True)
internal_efficiency_pct = models.FloatField(null=True, blank=True)
soc_min_pct = models.FloatField(null=True, blank=True)
soc_init_pct = models.FloatField(null=True, blank=True)
installed_cost_us_dollars_per_gal = models.FloatField(null=True, blank=True)
thermal_decay_rate_fraction = models.FloatField(null=True, blank=True)
om_cost_us_dollars_per_gal = models.FloatField(null=True, blank=True)
macrs_option_years = models.IntegerField(null=True, blank=True)
macrs_bonus_pct = models.FloatField(null=True, blank=True)
# Outputs
size_gal = models.FloatField(null=True, blank=True)
year_one_thermal_from_cold_tes_series_ton = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_cold_tes_soc_series_pct = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class HotTESModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=True)
min_gal = models.FloatField(null=True, blank=True)
max_gal = models.FloatField(null=True, blank=True)
hot_supply_water_temp_degF = models.FloatField(null=True, blank=True)
cooled_return_water_temp_degF = models.FloatField(null=True, blank=True)
internal_efficiency_pct = models.FloatField(null=True, blank=True)
soc_min_pct = models.FloatField(null=True, blank=True)
soc_init_pct = models.FloatField(null=True, blank=True)
installed_cost_us_dollars_per_gal = models.FloatField(null=True, blank=True)
thermal_decay_rate_fraction = models.FloatField(null=True, blank=True)
om_cost_us_dollars_per_gal = models.FloatField(null=True, blank=True)
macrs_option_years = models.IntegerField(null=True, blank=True)
macrs_bonus_pct = models.FloatField(null=True, blank=True)
# Outputs
size_gal = models.FloatField(null=True, blank=True)
year_one_thermal_from_hot_tes_series_mmbtu_per_hr = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_hot_tes_soc_series_pct = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class NewBoilerModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=True)
min_mmbtu_per_hr = models.FloatField(null=True, blank=True)
max_mmbtu_per_hr = models.FloatField(null=True, blank=True)
boiler_efficiency = models.FloatField(null=True, blank=True)
can_supply_steam_turbine = models.BooleanField(null=True, blank=True)
installed_cost_us_dollars_per_mmbtu_per_hr = models.FloatField(null=True, blank=True)
om_cost_us_dollars_per_mmbtu_per_hr = models.FloatField(null=True, blank=True)
om_cost_us_dollars_per_mmbtu = models.FloatField(null=True, blank=True)
emissions_factor_lb_CO2_per_mmbtu = models.FloatField(null=True, blank=True)
macrs_option_years = models.IntegerField(null=True, blank=True)
macrs_bonus_pct = models.FloatField(null=True, blank=True)
# Outputs
size_mmbtu_per_hr = models.FloatField(null=True, blank=True)
year_one_boiler_fuel_consumption_series_mmbtu_per_hr = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_boiler_thermal_production_series_mmbtu_per_hr = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_thermal_to_load_series_mmbtu_per_hour = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_thermal_to_tes_series_mmbtu_per_hour = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_thermal_to_steamturbine_series_mmbtu_per_hour = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_boiler_fuel_consumption_mmbtu = models.FloatField(null=True, blank=True)
year_one_boiler_thermal_production_mmbtu = models.FloatField(null=True, blank=True)
year_one_emissions_lb_C02 = models.FloatField(null=True, blank=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class SteamTurbineModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=True)
size_class = models.IntegerField(null=True, blank=True)
min_kw = models.FloatField(null=True, blank=True)
max_kw = models.FloatField(null=True, blank=True)
electric_produced_to_thermal_consumed_ratio = models.FloatField(null=True, blank=True)
thermal_produced_to_thermal_consumed_ratio = models.FloatField(null=True, blank=True)
is_condensing = models.BooleanField(null=True, blank=True)
inlet_steam_pressure_psig = models.FloatField(null=True, blank=True)
inlet_steam_temperature_degF = models.FloatField(blank=True, null=True)
inlet_steam_superheat_degF = models.FloatField(null=True, blank=True)
outlet_steam_pressure_psig = models.FloatField(null=True, blank=True)
outlet_steam_min_vapor_fraction = models.FloatField(null=True, blank=True)
isentropic_efficiency = models.FloatField(null=True, blank=True)
gearbox_generator_efficiency = models.FloatField(null=True, blank=True)
net_to_gross_electric_ratio = models.FloatField(null=True, blank=True)
installed_cost_us_dollars_per_kw = models.FloatField(null=True, blank=True)
om_cost_us_dollars_per_kw = models.FloatField(null=True, blank=True)
om_cost_us_dollars_per_kwh = models.FloatField(null=True, blank=True)
can_net_meter = models.BooleanField(null=True, blank=True)
can_wholesale = models.BooleanField(null=True, blank=True)
can_export_beyond_site_load = models.BooleanField(null=True, blank=True)
can_curtail = models.BooleanField(null=True, blank=True)
macrs_option_years = models.IntegerField(null=True, blank=True)
macrs_bonus_pct = models.FloatField(null=True, blank=True)
# Outputs
size_kw = models.FloatField(null=True, blank=True)
year_one_thermal_consumption_mmbtu = models.FloatField(null=True, blank=True)
year_one_electric_energy_produced_kwh = models.FloatField(null=True, blank=True)
year_one_thermal_energy_produced_mmbtu = models.FloatField(null=True, blank=True)
year_one_thermal_consumption_series_mmbtu_per_hr = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_electric_production_series_kw = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_to_battery_series_kw = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_to_load_series_kw = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_to_grid_series_kw = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_thermal_to_load_series_mmbtu_per_hour = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
year_one_thermal_to_tes_series_mmbtu_per_hour = ArrayField(
models.FloatField(null=True, blank=True), default=list, null=True, blank=True)
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class GHPModel(models.Model):
# Inputs
run_uuid = models.UUIDField(unique=True)
require_ghp_purchase = models.BooleanField(null=True, blank=True)
installed_cost_heatpump_us_dollars_per_ton = models.FloatField(null=True, blank=True)
heatpump_capacity_sizing_factor_on_peak_load = models.FloatField(null=True, blank=True)
installed_cost_ghx_us_dollars_per_ft = models.FloatField(null=True, blank=True)
installed_cost_building_hydronic_loop_us_dollars_per_sqft = models.FloatField(null=True, blank=True)
om_cost_us_dollars_per_sqft_year = models.FloatField(null=True, blank=True)
building_sqft = models.FloatField(null=True, blank=True)
ghpghx_inputs = ArrayField(PickledObjectField(null=True, editable=True), null=True, default=list)
ghpghx_response_uuids = ArrayField(models.TextField(null=True, blank=True), default=list, null=True)
can_serve_dhw = models.BooleanField(null=True, blank=True)
macrs_option_years = models.IntegerField(null=True, blank=True)
macrs_bonus_pct = models.FloatField(null=True, blank=True)
macrs_itc_reduction = models.FloatField(null=True, blank=True)
federal_itc_pct = models.FloatField(null=True, blank=True)
state_ibi_pct = models.FloatField(null=True, blank=True)
state_ibi_max_us_dollars = models.FloatField(null=True, blank=True)
utility_ibi_pct = models.FloatField(null=True, blank=True)
utility_ibi_max_us_dollars = models.FloatField(null=True, blank=True)
federal_rebate_us_dollars_per_ton = models.FloatField(null=True, blank=True)
state_rebate_us_dollars_per_ton = models.FloatField(null=True, blank=True)
state_rebate_max_us_dollars = models.FloatField(null=True, blank=True)
utility_rebate_us_dollars_per_ton = models.FloatField(null=True, blank=True)
utility_rebate_max_us_dollars = models.FloatField(null=True, blank=True)
# Outputs
# TODO may make this a UUIDField once it's actually assigned one from the GHPGHX endpoint
ghp_chosen_uuid = models.TextField(null=True, blank=True)
ghpghx_chosen_outputs = PickledObjectField(null=True, editable=True)
size_heat_pump_ton = models.FloatField(null=True, blank=True) # This includes a factor on the peak coincident heating+cooling load
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class MessageModel(models.Model):
"""
For Example:
{"messages":{
"warnings": "This is a warning message.",
"error": "REopt had an error."
}
}
"""
message_type = models.TextField(null=True, blank=True, default='')
message = models.TextField(null=True, blank=True, default='')
run_uuid = models.UUIDField(unique=False)
description = models.TextField(null=True, blank=True, default='')
@classmethod
def create(cls, **kwargs):
obj = cls(**kwargs)
obj.save()
return obj
class BadPost(models.Model):
run_uuid = models.UUIDField(unique=True)
post = models.TextField(null=True, blank=True)
errors = models.TextField(null=True, blank=True)
    def save(self, force_insert=False, force_update=False, using=None,
             update_fields=None):
        try:
            super(BadPost, self).save(force_insert, force_update, using, update_fields)
        except Exception as e:
            log.error("Database saving error: {}".format(e.args[0]))
def attribute_inputs(inputs):
    """Return only the scalar attributes of a section dict: keys that start with a
    lowercase letter (nested model sections such as 'Financial' are capitalized)
    and whose values are not None."""
    return {k: v for k, v in inputs.items() if k[0] == k[0].lower() and v is not None}
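# A standalone sketch of the attribute_inputs contract (the example dict is
# illustrative, not a real API payload). The filter is duplicated here under a
# hypothetical name so the example is self-contained: lowercase-keyed scalar
# inputs are kept, capitalized nested sections and None values are dropped.
def _attribute_inputs_sketch(inputs):
    return {k: v for k, v in inputs.items() if k[0] == k[0].lower() and v is not None}

assert _attribute_inputs_sketch(
    {"size_kw": 10.0, "max_kw": None, "Financial": {"om_cost": 1.0}}
) == {"size_kw": 10.0}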
class ErrorModel(models.Model):
task = models.TextField(null=True, blank=True, default='')
name = models.TextField(null=True, blank=True, default='')
run_uuid = models.TextField(null=True, blank=True, default='')
user_uuid = models.TextField(null=True, blank=True, default='')
message = models.TextField(null=True, blank=True, default='')
traceback = models.TextField(null=True, blank=True, default='')
created = models.DateTimeField(auto_now_add=True)
class ModelManager(object):
def __init__(self):
self.scenarioM = None
self.siteM = None
self.financialM = None
self.load_profileM = None
self.load_profile_boiler_fuelM = None
self.load_profile_chiller_electricM = None
self.electric_tariffM = None
self.fuel_tariffM = None
self.pvM = None
self.windM = None
self.storageM = None
self.generatorM = None
self.chpM = None
self.boilerM = None
self.electric_chillerM = None
self.absorption_chillerM = None
self.hot_tesM = None
self.cold_tesM = None
self.profileM = None
self.messagesM = None
self.new_boilerM = None
self.steam_turbineM = None
self.ghpM = None
def create_and_save(self, data):
"""
create and save models
saves input json to db tables
:param data: dict, constructed in api.py, mirrors reopt api response structure
"""
d = data["inputs"]['Scenario']
scenario_dict = data["outputs"]['Scenario'].copy()
scenario_dict.update(d)
self.scenarioM = ScenarioModel.create(**attribute_inputs(scenario_dict))
self.profileM = ProfileModel.create(run_uuid=self.scenarioM.run_uuid,
**attribute_inputs(scenario_dict['Profile']))
self.siteM = SiteModel.create(run_uuid=self.scenarioM.run_uuid, **attribute_inputs(d['Site']))
self.financialM = FinancialModel.create(run_uuid=self.scenarioM.run_uuid,
**attribute_inputs(d['Site']['Financial']))
self.load_profileM = LoadProfileModel.create(run_uuid=self.scenarioM.run_uuid,
**attribute_inputs(d['Site']['LoadProfile']))
self.load_profile_boiler_fuelM = LoadProfileBoilerFuelModel.create(run_uuid=self.scenarioM.run_uuid,
**attribute_inputs(d['Site']['LoadProfileBoilerFuel']))
self.load_profile_chiller_electricM = LoadProfileChillerThermalModel.create(run_uuid=self.scenarioM.run_uuid,
**attribute_inputs(d['Site']['LoadProfileChillerThermal']))
self.electric_tariffM = ElectricTariffModel.create(run_uuid=self.scenarioM.run_uuid,
**attribute_inputs(d['Site']['ElectricTariff']))
self.fuel_tariffM = FuelTariffModel.create(run_uuid=self.scenarioM.run_uuid,
**attribute_inputs(d['Site']['FuelTariff']))
        if isinstance(d['Site']['PV'], list):
            self.pvM = [PVModel.create(run_uuid=self.scenarioM.run_uuid, **attribute_inputs(pv))
                        for pv in d['Site']['PV']]
        elif isinstance(d['Site']['PV'], dict):
            self.pvM = PVModel.create(run_uuid=self.scenarioM.run_uuid, **attribute_inputs(d['Site']['PV']))
self.windM = WindModel.create(run_uuid=self.scenarioM.run_uuid, **attribute_inputs(d['Site']['Wind']))
self.storageM = StorageModel.create(run_uuid=self.scenarioM.run_uuid, **attribute_inputs(d['Site']['Storage']))
self.generatorM = GeneratorModel.create(run_uuid=self.scenarioM.run_uuid, **attribute_inputs(d['Site']['Generator']))
self.chpM = CHPModel.create(run_uuid=self.scenarioM.run_uuid, **attribute_inputs(d['Site']['CHP']))
self.boilerM = BoilerModel.create(run_uuid=self.scenarioM.run_uuid, **attribute_inputs(d['Site']['Boiler']))
self.electric_chillerM = ElectricChillerModel.create(run_uuid=self.scenarioM.run_uuid,
**attribute_inputs(d['Site']['ElectricChiller']))
self.absorption_chillerM = AbsorptionChillerModel.create(run_uuid=self.scenarioM.run_uuid,
**attribute_inputs(d['Site']['AbsorptionChiller']))
self.hot_tesM = HotTESModel.create(run_uuid=self.scenarioM.run_uuid,
**attribute_inputs(d['Site']['HotTES']))
self.cold_tesM = ColdTESModel.create(run_uuid=self.scenarioM.run_uuid,
**attribute_inputs(d['Site']['ColdTES']))
self.new_boilerM = NewBoilerModel.create(run_uuid=self.scenarioM.run_uuid,
**attribute_inputs(d['Site']['NewBoiler']))
self.steam_turbineM = SteamTurbineModel.create(run_uuid=self.scenarioM.run_uuid,
**attribute_inputs(d['Site']['SteamTurbine']))
self.ghpM = GHPModel.create(run_uuid=self.scenarioM.run_uuid, **attribute_inputs(d['Site']['GHP']))
for message_type, message in data['messages'].items():
MessageModel.create(run_uuid=self.scenarioM.run_uuid, message_type=message_type, message=message)
@staticmethod
def updateModel(modelName, modelData, run_uuid, number=None):
        if number is None:
            eval(modelName).objects.filter(run_uuid=run_uuid).update(**attribute_inputs(modelData))
        elif 'PV' in modelName:
            eval(modelName).objects.filter(run_uuid=run_uuid, pv_number=number).update(**attribute_inputs(modelData))
@staticmethod
def remove(run_uuid):
"""
remove Scenario from database
:param run_uuid: id of Scenario
:return: None
"""
ScenarioModel.objects.filter(run_uuid=run_uuid).delete()
ProfileModel.objects.filter(run_uuid=run_uuid).delete()
SiteModel.objects.filter(run_uuid=run_uuid).delete()
FinancialModel.objects.filter(run_uuid=run_uuid).delete()
LoadProfileModel.objects.filter(run_uuid=run_uuid).delete()
LoadProfileBoilerFuelModel.objects.filter(run_uuid=run_uuid).delete()
LoadProfileChillerThermalModel.objects.filter(run_uuid=run_uuid).delete()
ElectricTariffModel.objects.filter(run_uuid=run_uuid).delete()
PVModel.objects.filter(run_uuid=run_uuid).delete()
WindModel.objects.filter(run_uuid=run_uuid).delete()
StorageModel.objects.filter(run_uuid=run_uuid).delete()
GeneratorModel.objects.filter(run_uuid=run_uuid).delete()
CHPModel.objects.filter(run_uuid=run_uuid).delete()
BoilerModel.objects.filter(run_uuid=run_uuid).delete()
ElectricChillerModel.objects.filter(run_uuid=run_uuid).delete()
AbsorptionChillerModel.objects.filter(run_uuid=run_uuid).delete()
HotTESModel.objects.filter(run_uuid=run_uuid).delete()
ColdTESModel.objects.filter(run_uuid=run_uuid).delete()
NewBoilerModel.objects.filter(run_uuid=run_uuid).delete()
SteamTurbineModel.objects.filter(run_uuid=run_uuid).delete()
GHPModel.objects.filter(run_uuid=run_uuid).delete()
MessageModel.objects.filter(run_uuid=run_uuid).delete()
ErrorModel.objects.filter(run_uuid=run_uuid).delete()
@staticmethod
def update(data, run_uuid):
"""
save Scenario results in database
:param data: dict, constructed in api.py, mirrors reopt api response structure
        :param run_uuid: id of the Scenario to update
:return: None
"""
d = data["outputs"]["Scenario"]
ProfileModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d['Profile']))
SiteModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d['Site']))
FinancialModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d['Site']['Financial']))
LoadProfileModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d['Site']['LoadProfile']))
LoadProfileBoilerFuelModel.objects.filter(run_uuid=run_uuid).update(
**attribute_inputs(d['Site']['LoadProfileBoilerFuel']))
LoadProfileChillerThermalModel.objects.filter(run_uuid=run_uuid).update(
**attribute_inputs(d['Site']['LoadProfileChillerThermal']))
ElectricTariffModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d['Site']['ElectricTariff']))
FuelTariffModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d['Site']['FuelTariff']))
        if isinstance(d['Site']['PV'], dict):
            PVModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d['Site']['PV']))
        elif isinstance(d['Site']['PV'], list):
            for pv in d['Site']['PV']:
                PVModel.objects.filter(run_uuid=run_uuid, pv_number=pv['pv_number']).update(**attribute_inputs(pv))
WindModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d['Site']['Wind']))
StorageModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d['Site']['Storage']))
GeneratorModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d['Site']['Generator']))
CHPModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d['Site']['CHP']))
BoilerModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d['Site']['Boiler']))
ElectricChillerModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d['Site']['ElectricChiller']))
AbsorptionChillerModel.objects.filter(run_uuid=run_uuid).update(
**attribute_inputs(d['Site']['AbsorptionChiller']))
HotTESModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d['Site']['HotTES']))
ColdTESModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d['Site']['ColdTES']))
NewBoilerModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d['Site']['NewBoiler']))
SteamTurbineModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d['Site']['SteamTurbine']))
GHPModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d['Site']['GHP']))
for message_type, message in data['messages'].items():
            if not MessageModel.objects.filter(run_uuid=run_uuid, message=message).exists():
                MessageModel.create(run_uuid=run_uuid, message_type=message_type, message=message)
# Do this last so that the status does not change to optimal before the rest of the results are filled in
ScenarioModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d)) # force_update=True
@staticmethod
def update_scenario_and_messages(data, run_uuid):
"""
        update Scenario status and messages in database
:param data: dict, constructed in api.py, mirrors reopt api response structure
:return: None
"""
d = data["outputs"]["Scenario"]
ScenarioModel.objects.filter(run_uuid=run_uuid).update(**attribute_inputs(d))
for message_type, message in data['messages'].items():
            if not MessageModel.objects.filter(run_uuid=run_uuid, message=message).exists():
                MessageModel.create(run_uuid=run_uuid, message_type=message_type, message=message)
@staticmethod
def add_user_uuid(user_uuid, run_uuid):
"""
update the user_uuid associated with a Scenario
:param user_uuid: string
:param run_uuid: string
:return: None
"""
d = {"user_uuid": user_uuid}
ScenarioModel.objects.filter(run_uuid=run_uuid).update(**d)
ErrorModel.objects.filter(run_uuid=run_uuid).update(**d)
@staticmethod
def make_response(run_uuid):
"""
Reconstruct response dictionary from postgres tables (django models).
NOTE: postgres column type UUID is not JSON serializable. Work-around is removing those columns and then
adding back into outputs->Scenario as string.
:param run_uuid:
:return: nested dictionary matching nested_output_definitions
"""
def remove_number(k, d):
if k in d.keys():
del d[k]
return d
def remove_ids(d):
del d['run_uuid']
del d['id']
return d
def move_outs_to_ins(site_key, resp):
            if (resp['outputs']['Scenario']['Site'].get(site_key) or None) not in [None, [], {}]:
resp['inputs']['Scenario']['Site'][site_key] = dict()
for k in nested_input_definitions['Scenario']['Site'][site_key].keys():
try:
if site_key == "PV":
                        if isinstance(resp['outputs']['Scenario']['Site'][site_key], dict):
resp['inputs']['Scenario']['Site'][site_key][k] = resp['outputs']['Scenario']['Site'][site_key][k]
del resp['outputs']['Scenario']['Site'][site_key][k]
                        elif isinstance(resp['outputs']['Scenario']['Site'][site_key], list):
max_order = max([p.get('pv_number') for p in resp['outputs']['Scenario']['Site'][site_key]])
if resp['inputs']['Scenario']['Site'].get(site_key) == {}:
resp['inputs']['Scenario']['Site'][site_key] = []
                            if len(resp['inputs']['Scenario']['Site'][site_key]) == 0:
for _ in range(max_order):
resp['inputs']['Scenario']['Site'][site_key].append({})
for i in range(max_order):
resp['inputs']['Scenario']['Site'][site_key][i][k] = resp['outputs']['Scenario']['Site'][site_key][i][k]
if isinstance(resp['inputs']['Scenario']['Site'][site_key][i][k], list):
if len(resp['inputs']['Scenario']['Site'][site_key][i][k]) == 1:
resp['inputs']['Scenario']['Site'][site_key][i][k] = \
resp['inputs']['Scenario']['Site'][site_key][i][k][0]
if k not in ['pv_name']:
del resp['outputs']['Scenario']['Site'][site_key][i][k]
# special handling for inputs that can be scalar or array,
# (which we have to make an array in database)
else:
resp['inputs']['Scenario']['Site'][site_key][k] = resp['outputs']['Scenario']['Site'][site_key][k]
if isinstance(resp['inputs']['Scenario']['Site'][site_key][k], list):
if len(resp['inputs']['Scenario']['Site'][site_key][k]) == 1:
resp['inputs']['Scenario']['Site'][site_key][k] = \
resp['inputs']['Scenario']['Site'][site_key][k][0]
elif len(resp['inputs']['Scenario']['Site'][site_key][k]) == 0:
del resp['inputs']['Scenario']['Site'][site_key][k]
del resp['outputs']['Scenario']['Site'][site_key][k]
except KeyError: # known exception for k = urdb_response (user provided blended rates)
resp['inputs']['Scenario']['Site'][site_key][k] = None
site_keys = ['PV', 'Storage', 'Financial', 'LoadProfile', 'LoadProfileBoilerFuel', 'LoadProfileChillerThermal',
'ElectricTariff', 'FuelTariff', 'Generator', 'Wind', 'CHP', 'Boiler', 'ElectricChiller',
'AbsorptionChiller', 'HotTES', 'ColdTES', 'NewBoiler', 'SteamTurbine', 'GHP']
resp = dict()
resp['outputs'] = dict()
resp['outputs']['Scenario'] = dict()
resp['outputs']['Scenario']['Profile'] = dict()
resp['inputs'] = dict()
resp['inputs']['Scenario'] = dict()
resp['inputs']['Scenario']['Site'] = dict()
resp['messages'] = dict()
        try:
            scenario_model = ScenarioModel.objects.get(run_uuid=run_uuid)
        except ScenarioModel.DoesNotExist:
            resp['messages']['error'] = (
                "run_uuid {} not in database. "
                "You may have hit the results endpoint too quickly after POST'ing a scenario, "
                "you may have a typo in your run_uuid, or the scenario was deleted.").format(run_uuid)
            resp['outputs']['Scenario']['status'] = 'error'
            return resp
        # any other exception propagates with its original traceback
scenario_data = remove_ids(model_to_dict(scenario_model))
del scenario_data['job_type']
resp['outputs']['Scenario'] = scenario_data
resp['outputs']['Scenario']['run_uuid'] = str(run_uuid)
resp['outputs']['Scenario']['Site'] = remove_ids(model_to_dict(SiteModel.objects.get(run_uuid=run_uuid)))
        def add_site_record(model_cls, site_key):
            # attach the model's fields for this run_uuid to the response, if a record exists
            record = model_cls.objects.filter(run_uuid=run_uuid).first()
            if record is not None:
                resp['outputs']['Scenario']['Site'][site_key] = remove_ids(model_to_dict(record))

        add_site_record(FinancialModel, 'Financial')
        add_site_record(LoadProfileModel, 'LoadProfile')
        add_site_record(LoadProfileBoilerFuelModel, 'LoadProfileBoilerFuel')
        add_site_record(LoadProfileChillerThermalModel, 'LoadProfileChillerThermal')
        add_site_record(ElectricTariffModel, 'ElectricTariff')
        add_site_record(FuelTariffModel, 'FuelTariff')
        add_site_record(StorageModel, 'Storage')
        add_site_record(GeneratorModel, 'Generator')
        add_site_record(WindModel, 'Wind')
        add_site_record(CHPModel, 'CHP')
        add_site_record(BoilerModel, 'Boiler')
        add_site_record(ElectricChillerModel, 'ElectricChiller')
        add_site_record(AbsorptionChillerModel, 'AbsorptionChiller')
        add_site_record(HotTESModel, 'HotTES')
        add_site_record(ColdTESModel, 'ColdTES')
        add_site_record(NewBoilerModel, 'NewBoiler')
        add_site_record(SteamTurbineModel, 'SteamTurbine')
        add_site_record(GHPModel, 'GHP')
resp['outputs']['Scenario']['Site']['PV'] = []
for x in PVModel.objects.filter(run_uuid=run_uuid).order_by('pv_number'):
resp['outputs']['Scenario']['Site']['PV'].append(remove_ids(model_to_dict(x)))
profile_data = ProfileModel.objects.filter(run_uuid=run_uuid)
if len(profile_data) > 0:
resp['outputs']['Scenario']['Profile'] = remove_ids(model_to_dict(profile_data[0]))
for m in MessageModel.objects.filter(run_uuid=run_uuid).values('message_type', 'message'):
resp['messages'][m['message_type']] = m['message']
for scenario_key in nested_input_definitions['Scenario'].keys():
if scenario_key.islower():
resp['inputs']['Scenario'][scenario_key] = resp['outputs']['Scenario'][scenario_key]
del resp['outputs']['Scenario'][scenario_key]
for site_key in nested_input_definitions['Scenario']['Site'].keys():
if site_key.islower():
resp['inputs']['Scenario']['Site'][site_key] = resp['outputs']['Scenario']['Site'][site_key]
del resp['outputs']['Scenario']['Site'][site_key]
elif site_key in site_keys:
move_outs_to_ins(site_key, resp=resp)
if len(resp['inputs']['Scenario']['Site']['PV']) == 1:
resp['inputs']['Scenario']['Site']['PV'] = resp['inputs']['Scenario']['Site']['PV'][0]
resp['outputs']['Scenario']['Site']['PV'] = [remove_number('pv_number', x) for x in resp['outputs']['Scenario']['Site']['PV']]
if len(resp['outputs']['Scenario']['Site']['PV']) == 1:
resp['outputs']['Scenario']['Site']['PV'] = resp['outputs']['Scenario']['Site']['PV'][0]
if resp['inputs']['Scenario']['Site']['LoadProfile'].get('doe_reference_name') == '':
del resp['inputs']['Scenario']['Site']['LoadProfile']['doe_reference_name']
# Preserving backwards compatibility
resp['inputs']['Scenario']['Site']['LoadProfile']['outage_start_hour'] = resp['inputs']['Scenario']['Site']['LoadProfile'].get('outage_start_time_step')
if resp['inputs']['Scenario']['Site']['LoadProfile']['outage_start_hour'] is not None:
resp['inputs']['Scenario']['Site']['LoadProfile']['outage_start_hour'] -= 1
resp['inputs']['Scenario']['Site']['LoadProfile']['outage_end_hour'] = resp['inputs']['Scenario']['Site']['LoadProfile'].get('outage_end_time_step')
if resp['inputs']['Scenario']['Site']['LoadProfile']['outage_end_hour'] is not None:
resp['inputs']['Scenario']['Site']['LoadProfile']['outage_end_hour'] -= 1
return resp
# decryption.py (OverflowShell/OnlineCracking, Apache-2.0)
from urllib.request import urlopen
import hashlib
from requests.exceptions import ConnectionError
import os
import time
import random
import platform
import sys
import argparse
from hashlib import algorithms_available
from time import sleep as sl
if platform.system() != 'Linux':
    print("This can only be run on Linux!")
    sys.exit()
print("")
colores = {
"M" : "\033[1;31m",
"H" : "\033[1;32m",
"K" : "\033[1;33m",
"U" : "\033[1;34m",
"P" : "\033[1;35m",
"C" : "\033[1;36m",
"W": "\033[1;37m",
"A" : "\033[90m",
}
os.system("clear")
def clear():
os.system("clear")
#passwords = str(urlopen("https://raw.githubusercontent.com/danielmiessler/SecLists/master/Passwords/Common-Credentials/10-million-password-list-top-100000.txt").read(), "utf-8")
class Banner:
def __str__(self):
return colores["C"]+"""
/$$$$$$ /$$ /$$
/$$__ $$ | $$ |__/
| $$ \__/ /$$$$$$ /$$$$$$ /$$$$$$$| $$ /$$ /$$ /$$$$$$$ /$$$$$$
| $$ /$$__ $$|____ $$ /$$_____/| $$ /$$/| $$| $$__ $$ /$$__ $$
| $$ | $$ \__/ /$$$$$$$| $$ | $$$$$$/ | $$| $$ \ $$| $$ \ $$
| $$ $$| $$ /$$__ $$| $$ | $$_ $$ | $$| $$ | $$| $$ | $$
| $$$$$$/| $$ | $$$$$$$| $$$$$$$| $$ \ $$| $$| $$ | $$| $$$$$$$
\______/ |__/ \_______/ \_______/|__/ \__/|__/|__/ |__/ \____ $$
/$$ \ $$
| $$$$$$/
\______/
"""
h = Banner()
clear()
print(h)
sl(0.7)
# Flash the banner three times
for _ in range(3):
    clear()
    sl(0.3)
    print(h)
    sl(0.3)
sl(1.2)
class users:
    def agents(self):
        # Pick a random User-Agent header for outgoing requests
        list_agent = [
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; Crazy Browser 1.0.5)",
            "curl/7.7.2 (powerpc-apple-darwin6.0) libcurl 7.7.2 (OpenSSL 0.9.6b)",
            "Mozilla/5.0 (X11; U; Linux amd64; en-US; rv:5.0) Gecko/20110619 Firefox/5.0",
            "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)",
            "Opera/9.80 (Windows NT 6.1; U; sv) Presto/2.7.62 Version/11.01",
            "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.872.0 Safari/535.2",
            "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 7.1; Trident/5.0)",
            "Opera/9.80 (X11; Linux i686; U; pl) Presto/2.6.30 Version/10.61"
        ]
        random_agent = random.choice(list_agent)
        return {'User-Agent': random_agent}
def analise():
os.system("clear")
os.system("python3 analizer/hash-id.py ")
def arguments():
    parser = argparse.ArgumentParser()
    parser.description = 'HashCrack is a tool that lets you crack many algorithm families and their versions, including: MD5, SHA (1, 224, 256, 384, 512), SHA-3 (224, 256, 384, 512), blake2b'
    primary = parser.add_argument_group('Main arguments')
    primary.add_argument('-a', '--analyze', action='store_true', help='Analyze a hash')
    args = parser.parse_args()
    return args
WORDLIST_BASE = "https://raw.githubusercontent.com/danielmiessler/SecLists/master/Passwords/"
DEFAULT_WORDLIST = WORDLIST_BASE + "Common-Credentials/10-million-password-list-top-100000.txt"
def crack_hash(algorithm, wordlist_url=DEFAULT_WORDLIST):
    """Dictionary attack: hash every candidate with `algorithm` until one matches."""
    try:
        hashes = input(colores["P"] + "Enter the encrypted text: ")
        passwords = str(urlopen(wordlist_url).read(), "utf-8")
        for password in passwords.split("\n"):
            digest = hashlib.new(algorithm, bytes(password, "utf-8")).hexdigest()
            if digest == hashes:
                print(colores["H"] + "[+] The password is: " + str(password))
                exit()
            else:
                print(colores["M"] + "[+] Password: " + str(password) + " is not the password, trying the next one...")
        print("\nThe password is not in this dictionary")
    except AttributeError:
        pass
    except KeyboardInterrupt:
        print("\n[+] Process stopped by the user")
    except ConnectionResetError:
        print("Could not read the dictionary lines, check your internet connection")
def sha1():
    crack_hash("sha1", WORDLIST_BASE + "Leaked-Databases/rockyou-20.txt")
def md5():
    crack_hash("md5")
def sha224():
    crack_hash("sha224")
def sha256():
    crack_hash("sha256")
def sha512():
    crack_hash("sha512")
def sha384():
    crack_hash("sha384")
def blake2b():
    crack_hash("blake2b")
def blake2s():
    crack_hash("blake2s")
def ripemd160():
    # Note: hashlib.new('ripemd160') requires OpenSSL support for RIPEMD-160
    crack_hash("ripemd160")
def sha3_512():
    crack_hash("sha3_512")
def sha3_224():
    crack_hash("sha3_224")
def sha3_256():
    crack_hash("sha3_256", WORDLIST_BASE + "Common-Credentials/10-million-password-list-top-10000.txt")
def sha3_384():
    crack_hash("sha3_384", WORDLIST_BASE + "Common-Credentials/10-million-password-list-top-10000.txt")
def hash_analyzer():
os.system("clear")
system = input("¿Esta en una distribución Linux como Kali o Parrot? [Y]|[N] / Is it on a Linux distribution like Kali or Parrot? [Y]|[N] >>> ")
system_LOW = system.lower()
if system_LOW == "y":
os.system("python3 analizer/hash-id.py")
elif system_LOW =="n":
os.system("apt install pip")
os.system("pip install requests")
os.system("python3 analizer/hash-id.py")
def unicode():
    # NOTE: You can also encode and decode text to decimal Unicode at: https://cryptii.com/pipes/text-decimal
    true_false = str(input("Do you want to convert text to decimal Unicode? [y] / [n]: "))
    if true_false == "y":
        message = str(input("Enter a word: ")).lower()
        # Print the decimal Unicode code point of each character
        print(*[ord(letter) for letter in message])
    elif true_false == "n":
        question = str(input("Do you want to convert Unicode characters back? [y] / [n]: "))
        if question == "y":
            lon_2 = int(input("How many letters does your message have (you must count the spaces): "))
            try:
                chars = []
                for position in range(1, lon_2 + 1):
                    number = int(input("Enter part %d of the number: " % position))
                    chars.append(chr(number))
                print("The word or text in decimal Unicode is: ", *chars)
            except ValueError:
                print("[+] You must enter integers corresponding to the decimal Unicode encoding")
        else:
            print("You chose no, goodbye")
            exit()
    else:
        print("Wrong option, you can only enter <y> or <n>")
        exit()
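The conversion above is just Python's built-in `ord`/`chr` pair; a quick round trip:

```python
# Decimal Unicode round trip with ord() and chr()
codes = [ord(letter) for letter in "hola"]
print(codes)  # [104, 111, 108, 97]
text = "".join(chr(code) for code in codes)
print(text)  # hola
```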
def exit():
print("\t\nGoodbye :)\n")
sys.exit(1)
# I HOPE THIS LITTLE TOOL IS USEFUL TO YOU :)
# IF YOU MODIFY SOMETHING, ADD MY NAME TO THE SCRIPT XD
# search_utils.py (fei-protocol/checkthechain, MIT)
from __future__ import annotations
import typing
class NoMatchFound(LookupError):
pass
class SearchRangeTooLow(NoMatchFound):
pass
class MultipleMatchesFound(LookupError):
pass
M = typing.TypeVar('M', bound=typing.Mapping[typing.Any, typing.Any])
def get_matching_entries(
sequence: typing.Sequence[M],
query: typing.Mapping[typing.Any, typing.Any],
) -> list[M]:
matches: list[M] = []
for item in sequence:
for key, value in query.items():
if item.get(key) != value:
break
else:
matches.append(item)
return matches
@typing.overload
def get_matching_entry(
sequence: typing.Sequence[M],
query: typing.Mapping[typing.Any, typing.Any],
raise_if_not_found: typing.Literal[False],
) -> typing.Optional[M]:
...
@typing.overload
def get_matching_entry(
sequence: typing.Sequence[M],
query: typing.Mapping[typing.Any, typing.Any],
raise_if_not_found: typing.Literal[True] = True,
) -> M:
...
def get_matching_entry(
sequence: typing.Sequence[M],
query: typing.Mapping[typing.Any, typing.Any],
raise_if_not_found: bool = True,
) -> typing.Optional[M]:
matches = get_matching_entries(sequence=sequence, query=query)
if len(matches) == 1:
return matches[0]
elif len(matches) > 1:
raise MultipleMatchesFound('more than one match found in sequence')
else:
if raise_if_not_found:
raise NoMatchFound('no matches found in sequence')
else:
return None
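For illustration, the query semantics above amount to an equality check on every query key (the entries here are hypothetical):

```python
# An entry matches when every query key equals the entry's value
pools = [
    {'symbol': 'DAI', 'fee': 500},
    {'symbol': 'USDC', 'fee': 3000},
]
query = {'symbol': 'USDC'}
matches = [p for p in pools if all(p.get(k) == v for k, v in query.items())]
print(matches)  # [{'symbol': 'USDC', 'fee': 3000}]
```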
def binary_search(
    *,
    is_match: typing.Callable[[int], bool],
    start_index: int | None = None,
    end_index: int | None = None,
    index_range: typing.Sequence[int] | None = None,
    raise_if_not_found: bool = True,
) -> int | None:
    """return the first index for which is_match returns True

    assumes is_match is monotonic over the range: False below some
    threshold index and True at and above it
    """
    if start_index is None or end_index is None:
        if index_range is None:
            raise ValueError(
                'must specify index_range or start_index and end_index'
            )
        start_index, end_index = index_range
    start_index = int(start_index)
    end_index = int(end_index)
    if is_match(start_index):
        return start_index
    if not is_match(end_index):
        if raise_if_not_found:
            raise NoMatchFound('could not find match')
        else:
            return None
    while True:
        midpoint = (start_index + end_index) // 2
        if is_match(midpoint):
            end_index = midpoint
        else:
            start_index = midpoint
        if start_index + 1 == end_index:
            return end_index
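The loop above keeps the invariant `is_match(start_index) == False` and `is_match(end_index) == True` until the two indices are adjacent. A standalone mirror of that core loop (simplified signature, illustrative predicate):

```python
def first_true_index(is_match, start_index: int, end_index: int) -> int:
    # minimal mirror of binary_search's loop: is_match must be
    # monotonic (all False, then all True) over [start_index, end_index]
    if is_match(start_index):
        return start_index
    if not is_match(end_index):
        raise LookupError('could not find match')
    while True:
        midpoint = (start_index + end_index) // 2
        if is_match(midpoint):
            end_index = midpoint  # match found, answer is at or below midpoint
        else:
            start_index = midpoint  # no match, answer is above midpoint
        if start_index + 1 == end_index:
            return end_index


# first index whose square reaches 1000 -> 32, since 31**2 == 961
print(first_true_index(lambda i: i * i >= 1000, 0, 100))
```

Each iteration halves the gap, so the predicate is evaluated O(log(end_index - start_index)) times.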
async def async_binary_search(
    *,
    async_is_match: typing.Callable[
        [int], typing.Coroutine[typing.Any, typing.Any, bool]
    ],
    start_index: int | None = None,
    end_index: int | None = None,
    index_range: typing.Sequence[int] | None = None,
    raise_if_not_found: bool = True,
) -> int | None:
    """return the first index for which async_is_match returns True"""
    if start_index is None or end_index is None:
        if index_range is None:
            raise ValueError(
                'must specify index_range or start_index and end_index'
            )
        start_index, end_index = index_range
    start_index = int(start_index)
    end_index = int(end_index)
    if await async_is_match(start_index):
        return start_index
    if not await async_is_match(end_index):
        if raise_if_not_found:
            raise NoMatchFound('could not find match')
        else:
            return None
    while True:
        midpoint = (start_index + end_index) // 2
        if await async_is_match(midpoint):
            end_index = midpoint
        else:
            start_index = midpoint
        if start_index + 1 == end_index:
            return end_index
def nary_search(
    nary: int,
    start_index: int,
    end_index: int,
    is_match: typing.Callable[[typing.Sequence[int]], typing.Sequence[bool]],
    debug: bool = False,
    raise_if_not_found: bool = True,
    get_next_probes: typing.Callable[..., typing.Sequence[int]] | None = None,
) -> int | None:
    """return the first index for which is_match returns True

    n-ary analogue of binary_search: each round batches up to nary - 1
    probe indices into a single is_match call
    """
    if get_next_probes is None:
        get_next_probes = get_next_probes_linear
    extra_probes = [start_index, end_index]
    probe_min = start_index
    probe_max = end_index
    while True:

        # if probe range minimized, return result
        if probe_max == probe_min + 1:
            return probe_max

        # get next probes to test
        probes = get_next_probes(
            probe_min=probe_min, probe_max=probe_max, nary=nary
        )
        probes = sorted(set(probes))
        n_probes = len(probes)

        # add in extra probes for start_index and end_index
        all_probes = probes + extra_probes

        # compute results
        all_results = is_match(all_probes)
        results = all_results[:n_probes]
        extra_results = all_results[n_probes:]

        # separate start_index and end_index probes
        if len(extra_probes) > 0:
            start_result, end_result = extra_results
            if start_result:
                return start_index
            elif not end_result:
                if raise_if_not_found:
                    raise SearchRangeTooLow('search range does not go high enough')
                else:
                    return None
            extra_probes = []

        # determine lowest successful probe
        for p in range(len(probes)):
            if results[p]:
                break
        else:
            p += 1

        # print state
        if debug:
            print('probe_min:', probe_min)
            print('probe_max:', probe_max)
            print('n_probes:', n_probes)
            print('probes:', probes)
            print('results:', results)
            print('p:', p)
            print()

        # adjust search boundaries
        if p == 0:
            probe_max = probes[0]
        elif p == len(probes):
            probe_min = probes[-1]
        else:
            probe_min = probes[p - 1]
            probe_max = probes[p]
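Each round of the n-ary search keeps only the sub-range between the last False probe and the first True probe. A self-contained sketch of one narrowing round, assuming evenly spaced probes (the helper name `narrow_once` is illustrative, not from the module):

```python
def narrow_once(probe_min, probe_max, nary, is_match_many):
    # probe nary - 1 evenly spaced interior points, then keep the
    # sub-range between the last False probe and the first True probe
    n_probes = min(nary - 1, probe_max - probe_min - 1)
    d = (probe_max - probe_min) / (n_probes + 1)
    probes = sorted({round(probe_min + (p + 1) * d) for p in range(n_probes)})
    results = is_match_many(probes)
    for p, hit in enumerate(results):
        if hit:
            break
    else:
        p = len(probes)  # every probe was False
    lo = probe_min if p == 0 else probes[p - 1]
    hi = probe_max if p == len(probes) else probes[p]
    return lo, hi


# threshold at 70: one 5-ary round over (0, 100) probes [20, 40, 60, 80]
# and shrinks the range to (60, 80)
print(narrow_once(0, 100, 5, lambda xs: [x >= 70 for x in xs]))
```

Batching pays off when evaluating one index costs as much as evaluating many (e.g. one network round trip per `is_match` call): n-ary search needs roughly log base `nary` rounds instead of log base 2.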
async def async_nary_search(
    nary: int,
    start_index: int,
    end_index: int,
    async_is_match: typing.Callable[
        [typing.Sequence[int]],
        typing.Coroutine[typing.Any, typing.Any, typing.Sequence[bool]],
    ],
    debug: bool = False,
    raise_if_not_found: bool = True,
    get_next_probes: typing.Callable[..., typing.Sequence[int]] | None = None,
) -> int | None:
    """return the first index for which async_is_match returns True"""
    if get_next_probes is None:
        get_next_probes = get_next_probes_linear
    extra_probes = [start_index, end_index]
    probe_min = start_index
    probe_max = end_index
    while True:

        # if probe range minimized, return result
        if probe_max == probe_min + 1:
            return probe_max

        # get next probes to test
        probes = get_next_probes(
            probe_min=probe_min, probe_max=probe_max, nary=nary
        )
        probes = sorted(set(probes))
        n_probes = len(probes)

        # add in extra probes for start_index and end_index
        all_probes = probes + extra_probes

        # compute results
        all_results = await async_is_match(all_probes)
        results = all_results[:n_probes]
        extra_results = all_results[n_probes:]

        # separate start_index and end_index probes
        if len(extra_probes) > 0:
            start_result, end_result = extra_results
            if start_result:
                return start_index
            elif not end_result:
                if raise_if_not_found:
                    raise SearchRangeTooLow('search range does not go high enough')
                else:
                    return None
            extra_probes = []

        # determine lowest successful probe
        for p in range(len(probes)):
            if results[p]:
                break
        else:
            p += 1

        # print state
        if debug:
            print('probe_min:', probe_min)
            print('probe_max:', probe_max)
            print('n_probes:', n_probes)
            print('probes:', probes)
            print('results:', results)
            print('p:', p)
            print()

        # adjust search boundaries
        if p == 0:
            probe_max = probes[0]
        elif p == len(probes):
            probe_min = probes[-1]
        else:
            probe_min = probes[p - 1]
            probe_max = probes[p]
def get_next_probes_linear(
    probe_min: int,
    probe_max: int,
    nary: int,
) -> list[int]:
    """return up to nary - 1 probe indices evenly spaced across (probe_min, probe_max)"""
    n_probes = min(nary - 1, probe_max - probe_min - 1)
    d = (probe_max - probe_min) / (n_probes + 1)
    probes = [probe_min + (p + 1) * d for p in range(n_probes)]
    return [round(probe) for probe in probes]
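A quick check of the spacing arithmetic, with the function restated inline so the snippet runs standalone. Note that Python's `round` uses banker's rounding, so ideal points ending in .5 round to the nearest even integer:

```python
def get_next_probes_linear(probe_min: int, probe_max: int, nary: int) -> list:
    # same formula as above: up to nary - 1 interior probes, evenly spaced
    n_probes = min(nary - 1, probe_max - probe_min - 1)
    d = (probe_max - probe_min) / (n_probes + 1)
    return [round(probe_min + (p + 1) * d) for p in range(n_probes)]


# 4-ary over (0, 10): ideal points 2.5, 5.0, 7.5 round to 2, 5, 8
print(get_next_probes_linear(0, 10, 4))
# a narrow range caps the probe count at probe_max - probe_min - 1
print(get_next_probes_linear(0, 3, 8))
```

The cap on `n_probes` guarantees every probe is a distinct interior index, so the caller's `sorted(set(probes))` never collapses the batch below what the range can support.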
| 27.386503 | 83 | 0.59207 | 1,112 | 8,928 | 4.515288 | 0.104317 | 0.067716 | 0.033659 | 0.035849 | 0.839673 | 0.82135 | 0.813583 | 0.794264 | 0.777933 | 0.762796 | 0 | 0.00412 | 0.320341 | 8,928 | 325 | 84 | 27.470769 | 0.823336 | 0.06026 | 0 | 0.778689 | 0 | 0 | 0.045236 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028689 | false | 0.012295 | 0.008197 | 0 | 0.114754 | 0.057377 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
164296bbe8c9da8f0ea517844e99b945249adfb6 | 29,681 | py | Python | label_studio_withoutsignin/tests/test_endpoints.py | DimaVinnitsa/label-studio | b33ef9edc5efef5f5a073e3a457832278afbf2cf | [
"Apache-2.0"
] | null | null | null | label_studio_withoutsignin/tests/test_endpoints.py | DimaVinnitsa/label-studio | b33ef9edc5efef5f5a073e3a457832278afbf2cf | [
"Apache-2.0"
] | null | null | null | label_studio_withoutsignin/tests/test_endpoints.py | DimaVinnitsa/label-studio | b33ef9edc5efef5f5a073e3a457832278afbf2cf | [
"Apache-2.0"
] | null | null | null | """This file and its contents are licensed under the Apache License 2.0. Please see the included NOTICE for copyright information and LICENSE for a copy of the license.
"""
import pytest
import django
from django.urls import get_resolver
from django.shortcuts import reverse
from tasks.models import Annotation
from tasks.models import Task
owner_statuses = {
"/tasks/1000/label": {"get": 200, "post": 200, "put": 405, "patch": 405, "delete": 405},
"/tasks/1000/delete": {"get": 302, "post": 404, "put": 405, "patch": 405, "delete": 405},
"/tasks/1000/explore": {"get": 200, "post": 200, "put": 405, "patch": 405, "delete": 405},
"/api/tasks/1000/cancel": {"get": 405, "post": 200, "put": 405, "patch": 405, "delete": 405},
"/api/tasks/1000/annotations/": {
"get": 200,
"post": 201,
"put": 405,
"patch": 405,
"delete": 405,
},
"/api/tasks/1000/annotations/1000/": {
"get": 200,
"post": 405,
"put": 200,
"patch": 200,
"delete": 204,
},
"/api/tasks/1000/": {"get": 200, "post": 405, "put": 400, "patch": 400, "delete": 204},
"/api/projects/1000/annotations/": {
"get": 405,
"post": 405,
"put": 405,
"patch": 405,
"delete": 204,
},
"/api/projects/1000/results/": {
"get": 200,
"post": 405,
"put": 405,
"patch": 405,
"delete": 405,
},
"/api/projects/1000/tasks/bulk/": {
"get": 405,
"post": 400,
"put": 405,
"patch": 405,
"delete": 405,
},
"/api/projects/1000/tasks/": {"get": 200, "post": 415, "put": 405, "patch": 405, "delete": 204},
"/annotator/invites/1000": {"get": 403, "post": 403, "put": 403, "patch": 403, "delete": 403},
"/annotator/projects/1000/editor": {
"get": 403,
"post": 403,
"put": 403,
"patch": 403,
"delete": 403,
},
"/annotator/projects/": {"get": 403, "post": 403, "put": 403, "patch": 403, "delete": 403},
"/annotator/account/": {"get": 403, "post": 403, "put": 403, "patch": 403, "delete": 403},
"/annotator/signup/": {"get": 403, "post": 403, "put": 403, "patch": 403, "delete": 403},
"/annotator/login/": {"get": 403, "post": 403, "put": 403, "patch": 403, "delete": 403},
"/logout": {"get": 302, "post": 302, "put": 302, "patch": 302, "delete": 302},
"/api/": {"get": 200, "post": 405, "put": 405, "patch": 405, "delete": 405},
"/api/projects/validate": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/api/projects/template": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/api/projects/1000/backends": {
"get": 401,
"post": 401,
"put": 401,
"patch": 401,
"delete": 401,
},
"/api/projects/1000/backends/connections": {
"get": 401,
"post": 401,
"put": 401,
"patch": 401,
"delete": 401,
},
"/api/projects/backends": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/api/projects/1000/predict": {
"get": 401,
"post": 401,
"put": 401,
"patch": 401,
"delete": 401,
},
"/api/projects/1000/onboarding/1000": {
"get": 401,
"post": 401,
"put": 401,
"patch": 401,
"delete": 401,
},
"/api/projects/1000/next": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/api/projects/1000/expert_instruction": {
"get": 401,
"post": 401,
"put": 401,
"patch": 401,
"delete": 401,
},
"/api/projects/1000/": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/api/projects/": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/projects/upload-example/": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/projects/1000/ml": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/projects/1000/plots": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/projects/1000/experts": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/projects/1000/delete": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/projects/1000/duplicate": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/projects/1000/upload-example/": {
"get": 401,
"post": 401,
"put": 401,
"patch": 401,
"delete": 401,
},
"/projects/1000/data": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/projects/1000/settings/edit-config": {
"get": 401,
"post": 401,
"put": 401,
"patch": 401,
"delete": 401,
},
"/projects/1000/settings": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/projects/1000/": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/projects/render": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/projects/template/": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/projects/create/": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/projects/": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/business/not-approved": {"get": 200, "post": 200, "put": 200, "patch": 200, "delete": 200},
"/business/stats": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/business/experts/list": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/user/account/": {"get": 401, "post": 401, "put": 401, "patch": 401, "delete": 401},
"/user/signup/": {"get": 200, "post": 200, "put": 200, "patch": 200, "delete": 200},
"/user/login/": {"get": 200, "post": 200, "put": 200, "patch": 200, "delete": 200},
"/django-rq/queues/1000/1000/enqueue/": {
"get": 302,
"post": 302,
"put": 302,
"patch": 302,
"delete": 302,
},
"/django-rq/queues/1000/1000/requeue/": {
"get": 302,
"post": 302,
"put": 302,
"patch": 302,
"delete": 302,
},
"/django-rq/queues/actions/1000/": {
"get": 302,
"post": 302,
"put": 302,
"patch": 302,
"delete": 302,
},
"/django-rq/queues/1000/1000/delete/": {
"get": 302,
"post": 302,
"put": 302,
"patch": 302,
"delete": 302,
},
"/django-rq/queues/1000/1000/": {
"get": 302,
"post": 302,
"put": 302,
"patch": 302,
"delete": 302,
},
"/django-rq/queues/1000/requeue-all/": {
"get": 302,
"post": 302,
"put": 302,
"patch": 302,
"delete": 302,
},
"/django-rq/queues/1000/empty/": {
"get": 302,
"post": 302,
"put": 302,
"patch": 302,
"delete": 302,
},
"/django-rq/queues/1000/deferred/": {
"get": 302,
"post": 302,
"put": 302,
"patch": 302,
"delete": 302,
},
"/django-rq/queues/1000/started/": {
"get": 302,
"post": 302,
"put": 302,
"patch": 302,
"delete": 302,
},
"/django-rq/queues/1000/finished/": {
"get": 302,
"post": 302,
"put": 302,
"patch": 302,
"delete": 302,
},
"/django-rq/workers/1000/1000/": {
"get": 302,
"post": 302,
"put": 302,
"patch": 302,
"delete": 302,
},
"/django-rq/workers/1000/": {"get": 302, "post": 302, "put": 302, "patch": 302, "delete": 302},
"/django-rq/queues/1000/": {"get": 302, "post": 302, "put": 302, "patch": 302, "delete": 302},
"/django-rq/stats.json/": {"get": 200, "post": 200, "put": 200, "patch": 200, "delete": 200},
"/django-rq/": {"get": 302, "post": 302, "put": 302, "patch": 302, "delete": 302},
}
other_business_statuses = {
"/tasks/1000/label": {"get": 403, "post": 403, "put": 405, "delete": 405},
"/tasks/1000/delete": {"get": 403, "post": 403, "put": 405, "delete": 405},
"/tasks/1000/explore": {"get": 403, "post": 403, "put": 405, "delete": 405},
"/api/tasks/1000/cancel": {"get": 405, "post": 403, "put": 405, "delete": 405},
"/api/tasks/1000/annotations/": {"get": 403, "post": 403, "put": 405, "delete": 405},
"/api/tasks/1000/annotations/1000/": {"get": 403, "post": 405, "put": 403, "delete": 403},
"/api/tasks/1000/": {"get": 403, "post": 405, "put": 403, "delete": 403},
"/api/projects/1000/tasks/delete": {"get": 405, "post": 405, "put": 405, "delete": 403},
"/api/projects/1000/annotations/delete": {"get": 405, "post": 405, "put": 405, "delete": 403},
"/api/projects/1000/results/": {"get": 403, "post": 405, "put": 405, "delete": 405},
"/api/projects/1000/tasks/bulk/": {"get": 405, "post": 403, "put": 405, "delete": 405},
"/api/projects/1000/tasks/": {"get": 403, "post": 415, "put": 405, "delete": 405},
"/annotator/invites/1000": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/annotator/projects/1000/editor": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/annotator/projects/": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/annotator/account/": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/annotator/signup/": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/annotator/login/": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/logout": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/api/": {"get": 200, "post": 405, "put": 405, "delete": 405},
"/api/projects/validate": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/template": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/1000/backends": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/1000/predict": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/1000/onboarding/1000": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/1000/next": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/1000/expert_instruction": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/1000/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/upload-example/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/ml": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/plots": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/experts": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/delete": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/duplicate": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/upload-example/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/data/upload": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/data": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/settings/edit-config": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/settings": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/render": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/template/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/create/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/business/not-approved": {"get": 200, "post": 200, "put": 200, "delete": 200},
"/business/stats": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/business/experts/list": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/user/account/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/user/signup/": {"get": 200, "post": 200, "put": 200, "delete": 200},
"/user/login/": {"get": 200, "post": 200, "put": 200, "delete": 200},
"/django-rq/queues/1000/1000/enqueue/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/1000/requeue/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/actions/1000/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/1000/delete/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/1000/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/requeue-all/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/empty/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/deferred/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/started/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/finished/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/workers/1000/1000/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/workers/1000/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/stats.json/": {"get": 200, "post": 200, "put": 200, "delete": 200},
"/django-rq/": {"get": 302, "post": 302, "put": 302, "delete": 302},
}
other_annotator_statuses = {
"/tasks/1000/label": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/tasks/1000/delete": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/tasks/1000/explore": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/api/tasks/1000/cancel": {"get": 405, "post": 403, "put": 405, "delete": 405},
"/api/tasks/1000/annotations/": {"get": 403, "post": 403, "put": 405, "delete": 405},
"/api/tasks/1000/annotations/1000/": {"get": 403, "post": 405, "put": 403, "delete": 403},
"/api/tasks/1000/": {"get": 403, "post": 405, "put": 403, "delete": 403},
"/api/projects/1000/tasks/delete": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/api/projects/1000/annotations/delete": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/api/projects/1000/results/": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/api/projects/1000/tasks/bulk/": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/api/projects/1000/tasks/": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/annotator/invites/1000": {"get": 404, "post": 404, "put": 404, "delete": 404},
"/annotator/projects/1000/editor": {"get": 403, "post": 403, "put": 405, "delete": 405},
"/annotator/projects/": {"get": 200, "post": 200, "put": 405, "delete": 405},
"/annotator/account/": {"get": 200, "post": 302, "put": 405, "delete": 405},
"/annotator/signup/": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/annotator/login/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/logout": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/api/": {"get": 200, "post": 405, "put": 405, "delete": 405},
"/api/projects/validate": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/template": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/1000/backends": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/1000/predict": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/1000/onboarding/1000": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/1000/next": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/1000/expert_instruction": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/1000/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/upload-example/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/ml": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/plots": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/experts": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/delete": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/duplicate": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/upload-example/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/data/upload": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/data": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/settings/edit-config": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/settings": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/render": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/template/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/create/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/business/not-approved": {"get": 200, "post": 200, "put": 200, "delete": 200},
"/business/stats": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/business/experts/list": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/user/account/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/user/signup/": {"get": 200, "post": 200, "put": 200, "delete": 200},
"/user/login/": {"get": 200, "post": 200, "put": 200, "delete": 200},
"/django-rq/queues/1000/1000/enqueue/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/1000/requeue/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/actions/1000/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/1000/delete/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/1000/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/requeue-all/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/empty/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/deferred/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/started/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/finished/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/workers/1000/1000/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/workers/1000/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/stats.json/": {"get": 200, "post": 200, "put": 200, "delete": 200},
"/django-rq/": {"get": 302, "post": 302, "put": 302, "delete": 302},
}
group_annotator_statuses = {
"/tasks/1000/label": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/tasks/1000/delete": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/tasks/1000/explore": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/api/tasks/1000/cancel": {"get": 405, "post": 403, "put": 405, "delete": 405},
"/api/tasks/1000/annotations/": {"get": 403, "post": 403, "put": 405, "delete": 405},
"/api/tasks/1000/annotations/1000/": {"get": 403, "post": 405, "put": 403, "delete": 403},
"/api/tasks/1000/": {"get": 403, "post": 405, "put": 403, "delete": 403},
"/api/projects/1000/tasks/delete": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/api/projects/1000/annotations/delete": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/api/projects/1000/results/": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/api/projects/1000/tasks/bulk/": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/api/projects/1000/tasks/": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/annotator/invites/1000": {"get": 404, "post": 404, "put": 404, "delete": 404},
"/annotator/projects/1000/editor": {"get": 403, "post": 403, "put": 405, "delete": 405},
"/annotator/projects/": {"get": 200, "post": 200, "put": 405, "delete": 405},
"/annotator/account/": {"get": 200, "post": 302, "put": 405, "delete": 405},
"/annotator/signup/": {"get": 403, "post": 403, "put": 403, "delete": 403},
"/annotator/login/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/logout": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/api/": {"get": 200, "post": 405, "put": 405, "delete": 405},
"/api/projects/validate": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/template": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/1000/backends": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/1000/predict": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/1000/onboarding/1000": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/1000/next": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/1000/expert_instruction": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/api/projects/1000/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/upload-example/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/ml": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/plots": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/experts": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/delete": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/duplicate": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/upload-example/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/data/upload": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/data": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/settings/edit-config": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/settings": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/1000/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/render": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/template/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/create/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/projects/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/business/not-approved": {"get": 200, "post": 200, "put": 200, "delete": 200},
"/business/stats": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/business/experts/list": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/user/account/": {"get": 401, "post": 401, "put": 401, "delete": 401},
"/user/signup/": {"get": 200, "post": 200, "put": 200, "delete": 200},
"/user/login/": {"get": 200, "post": 200, "put": 200, "delete": 200},
"/django-rq/queues/1000/1000/enqueue/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/1000/requeue/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/actions/1000/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/1000/delete/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/1000/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/requeue-all/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/empty/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/deferred/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/started/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/finished/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/workers/1000/1000/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/workers/1000/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/queues/1000/": {"get": 302, "post": 302, "put": 302, "delete": 302},
"/django-rq/stats.json/": {"get": 200, "post": 200, "put": 200, "delete": 200},
"/django-rq/": {"get": 302, "post": 302, "put": 302, "delete": 302},
}
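When a route's behavior changes, diffing a freshly collected statuses dict against the expected tables above pinpoints the mismatch per URL and method. A small hypothetical helper (not part of this test suite) sketching that comparison:

```python
def diff_statuses(expected, actual):
    # return {url: {method: (expected_code, actual_code)}} for every mismatch;
    # a URL missing from actual reports every method with actual code None
    diffs = {}
    for url, methods in expected.items():
        got = actual.get(url, {})
        changed = {
            m: (code, got.get(m))
            for m, code in methods.items()
            if got.get(m) != code
        }
        if changed:
            diffs[url] = changed
    return diffs


expected = {"/logout": {"get": 302, "post": 302}}
actual = {"/logout": {"get": 200, "post": 302}}
print(diff_statuses(expected, actual))  # {'/logout': {'get': (302, 200)}}
```

Feeding the `statuses` dict collected by `check_urls` through such a helper would turn a wall of status codes into just the rows that regressed.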
def build_urls(project_id, task_id, annotation_id):
    """Get all the urls from django"""
    urls = []
    exclude_urls = {"schema-json", "schema-swagger-ui", "schema-redoc"}
    resolver = get_resolver(None).reverse_dict
    for url_name in resolver:
        if isinstance(url_name, str) and url_name not in exclude_urls:
            keys = resolver[url_name][0][0][1]
            kwargs = {}
            for key in keys:
                if "pk" in key:
                    kwargs[key] = 1000  # e.g. user_pk or project_pk will be 1000
                if key in ["pk", "step_pk", "job_id", "queue_index"]:
                    kwargs[key] = 1000
                elif key in ["token", "uidb64"]:
                    kwargs[key] = 1000
                elif key in ["key"]:
                    kwargs[key] = "1000"
                # we need to use really existing project/task/annotation ids from the fixture
                if key == "project_id" or key == "project_pk":
                    kwargs[key] = project_id
                elif key == "task_id":
                    kwargs[key] = task_id
                elif key == "annotation_id":
                    kwargs[key] = annotation_id
                elif "id" in key:
                    kwargs[key] = 1
            if url_name == "password_reset_confirm":
                kwargs["token"] = "1000-1000"
                kwargs["uidb64"] = "1000"
            try:
                url = reverse(url_name, kwargs=kwargs)
            except django.urls.exceptions.NoReverseMatch as e:
                print(
                    f'\n\n ---> Could not find "{url_name}" with django reverse and kwargs "{kwargs}".\n'
                    f"Probably some kwarg is absent\n\n"
                )
                raise e
            exclude = ["/password-reset/complete/", "/password-reset/"]
            add = True
            for exc in exclude:
                if url.startswith(exc):
                    add = False
            if add:
                urls.append(url)
    return urls
def restore_objects(project):
    """Create task and annotation for URL tests"""
    if project.pk != 1000:
        project.pk = 1000
        project.title += "2"
        project.save()
    try:
        task_db = Task.objects.get(pk=1000)
    except Task.DoesNotExist:
        task_db = Task()
        task_db.data = {"data": {"image": "kittens.jpg"}}
        task_db.project = project
        task_db.id = 1000  # use a fixed high id to avoid colliding with autoincremented ids
        task_db.save()
    try:
        annotation_db = Annotation.objects.get(pk=1000)
    except Annotation.DoesNotExist:
        task_db = Task.objects.get(pk=1000)
        annotation_db = Annotation()
        annotation = [
            {"from_name": "some", "to_name": "x", "type": "none", "value": {"none": ["Opossum"]}}
        ]
        annotation_db.result = annotation
        annotation_db.id = 1000  # same fixed id trick as for the task
        annotation_db.task = task_db
        annotation_db.save()
    return task_db, annotation_db
def check_urls(urls, runner, match_statuses, project):
    statuses = {}
    for url in urls:
        print("-->", url)
        status = {}
        restore_objects(project)
        r = runner.get(url)
        status["get"] = r.status_code
        r = runner.post(url)
        status["post"] = r.status_code
        r = runner.put(url)
        status["put"] = r.status_code
        r = runner.patch(url)
        status["patch"] = r.status_code
        r = runner.delete(url)
        status["delete"] = r.status_code
        # assert url in match_statuses, '\nNew URL found, please check statuses and add \n\n' \
        #     + url + ': ' + str(status) + \
        #     '\n\nto dict \n\n' + runner.statuses_name + '\n'
        statuses[url] = status
        # assert match_statuses[url] == status, f'Expected statuses mismatch: "{url}"'
    # print(statuses)  # use this to collect the urls -> statuses dict
def run(owner, runner):
"""Get all urls from Django and GET/POST/PUT/DELETE them"""
owner.task_db, owner.annotation_db = restore_objects(owner.project)
urls = build_urls(owner.project.id, owner.task_db.id, owner.annotation_db.id)
check_urls(urls, runner, runner.statuses, owner.project)
@pytest.mark.django_db
def test_all_urls_owner(setup_project_choices):
    runner = owner = setup_project_choices
    runner.statuses = owner_statuses
    runner.statuses_name = "owner_statuses"
    run(owner, runner)


@pytest.mark.django_db
def test_all_urls_other_business(setup_project_choices, business_client):
    business_client.statuses = other_business_statuses
    business_client.statuses_name = "other_business_statuses"
    run(setup_project_choices, business_client)
# Copyright 2017-present Open Networking Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Copyright 2016-present Ciena Corporation
# Licensed under the Apache License, Version 2.0 (the "License"); see above.
#
from twisted.internet import defer
from nose.tools import *
from nose.twistedtools import reactor, deferred
from scapy.all import *
from select import select as socket_select
import time, monotonic
import os
import random
import threading
from IGMP import *
from McastTraffic import *
from Stats import Stats
from OnosCtrl import OnosCtrl
from OltConfig import OltConfig
from Channels import IgmpChannel
from CordLogger import CordLogger
from CordTestConfig import setup_module, teardown_module
from CordTestUtils import log_test
log_test.setLevel('INFO')
class IGMPTestState:

    def __init__(self, groups = [], df = None, state = 0):
        self.df = df
        self.state = state
        self.counter = 0
        self.groups = groups
        self.group_map = {}  ##create a send/recv count map
        for g in groups:
            self.group_map[g] = (Stats(), Stats())

    def update(self, group, tx = 0, rx = 0, t = 0):
        self.counter += 1
        index = 0 if rx == 0 else 1
        v = tx if rx == 0 else rx
        if self.group_map.has_key(group):
            self.group_map[group][index].update(packets = v, t = t)

    def update_state(self):
        self.state = self.state ^ 1
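# Toy stand-in (sketch, all names hypothetical) showing how the per-group
# (tx, rx) tuple map above is meant to be read: index 0 counts sends, index 1
# counts receives, with a trivial Stats replacement.

```python
class ToyStats(object):
    """Minimal packet counter standing in for the real Stats class."""
    def __init__(self):
        self.count = 0

    def update(self, packets=0, t=0):
        self.count += packets

class ToyState(object):
    """group -> (tx_stats, rx_stats), mirroring IGMPTestState.group_map."""
    def __init__(self, groups):
        self.group_map = {g: (ToyStats(), ToyStats()) for g in groups}

    def update(self, group, tx=0, rx=0, t=0):
        index = 0 if rx == 0 else 1   # rx == 0 means this is a send update
        v = tx if rx == 0 else rx
        if group in self.group_map:
            self.group_map[group][index].update(packets=v, t=t)

state = ToyState(['239.1.2.3'])
state.update('239.1.2.3', tx=1)   # one packet sent
state.update('239.1.2.3', rx=1)   # one packet received
```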
class igmp_exchange(CordLogger):

    V_INF1 = 'veth0'
    MGROUP1 = '239.1.2.3'
    MGROUP2 = '239.2.2.3'
    MINVALIDGROUP1 = '255.255.255.255'
    MINVALIDGROUP2 = '239.255.255.255'
    MMACGROUP1 = "01:00:5e:01:02:03"
    MMACGROUP2 = "01:00:5e:02:02:03"
    IGMP_DST_MAC = "01:00:5e:00:00:16"
    IGMP_SRC_MAC = "5a:e1:ac:ec:4d:a1"
    IP_SRC = '1.2.3.4'
    IP_DST = '224.0.0.22'
    NEGATIVE_TRAFFIC_STATUS = 1
    igmp_eth = Ether(dst = IGMP_DST_MAC, type = ETH_P_IP)
    igmp_ip = IP(dst = IP_DST)
    IGMP_TEST_TIMEOUT = 5
    IGMP_QUERY_TIMEOUT = 60
    MCAST_TRAFFIC_TIMEOUT = 20
    PORT_TX_DEFAULT = 2
    PORT_RX_DEFAULT = 1
    max_packets = 100
    app = 'org.opencord.igmp'
    olt_conf_file = os.getenv('OLT_CONFIG_FILE', os.path.join(os.path.dirname(os.path.realpath(__file__)), '../setup/olt_config.json'))
    ROVER_TEST_TIMEOUT = 300 #3600*86
    ROVER_TIMEOUT = (ROVER_TEST_TIMEOUT - 100)
    ROVER_JOIN_TIMEOUT = 60
    VOLTHA_ENABLED = bool(int(os.getenv('VOLTHA_ENABLED', 0)))

    @classmethod
    def setUpClass(cls):
        cls.olt = OltConfig(olt_conf_file = cls.olt_conf_file)
        cls.port_map, _ = cls.olt.olt_port_map()
        if cls.VOLTHA_ENABLED is False:
            OnosCtrl.config_device_driver()
            OnosCtrl.cord_olt_config(cls.olt)
        time.sleep(2)

    @classmethod
    def tearDownClass(cls):
        if cls.VOLTHA_ENABLED is False:
            OnosCtrl.config_device_driver(driver = 'ovs')
    def setUp(self):
        '''Activate the igmp app'''
        super(igmp_exchange, self).setUp()
        self.onos_ctrl = OnosCtrl(self.app)
        self.onos_ctrl.activate()
        self.igmp_channel = IgmpChannel()

    def tearDown(self):
        super(igmp_exchange, self).tearDown()

    def onos_load_config(self, config):
        log_test.info('onos load config is %s' %config)
        status, code = OnosCtrl.config(config)
        if status is False:
            log_test.info('JSON request returned status %d' %code)
            assert_equal(status, True)
        time.sleep(2)

    def onos_ssm_table_load(self, groups, src_list = ['1.2.3.4'], flag = False):
        return  # NOTE: ssm table load is short-circuited here; the code below is currently unreachable
        ssm_dict = {'apps' : { 'org.opencord.igmp' : { 'ssmTranslate' : [] } } }
        ssm_xlate_list = ssm_dict['apps']['org.opencord.igmp']['ssmTranslate']
        if flag: #to maintain separate group-source pairs
            for i in range(len(groups)):
                d = {}
                d['source'] = src_list[i] or '0.0.0.0'
                d['group'] = groups[i]
                ssm_xlate_list.append(d)
        else:
            for g in groups:
                for s in src_list:
                    d = {}
                    d['source'] = s or '0.0.0.0'
                    d['group'] = g
                    ssm_xlate_list.append(d)
        self.onos_load_config(ssm_dict)
        cord_port_map = {}
        for g in groups:
            cord_port_map[g] = (self.PORT_TX_DEFAULT, self.PORT_RX_DEFAULT)
        self.igmp_channel.cord_port_table_load(cord_port_map)
        time.sleep(2)
    def _ip_walk(self, start_ip, end_ip):
        '''Enumerate addresses from start_ip to end_ip inclusive, carrying octets past 255'''
        start = list(map(int, start_ip.split(".")))
        end = list(map(int, end_ip.split(".")))
        temp = start
        ip_range = []
        ip_range.append(start_ip)
        while temp != end:
            start[3] += 1
            for i in (3, 2, 1):
                if temp[i] == 255:
                    temp[i] = 0
                    temp[i-1] += 1
            ip_range.append(".".join(map(str, temp)))
        return ip_range

    def mcast_ip_range(self, start_ip = '224.0.1.0', end_ip = '224.0.1.100'):
        return self._ip_walk(start_ip, end_ip)

    def random_mcast_ip(self, start_ip = '224.0.1.0', end_ip = '224.0.1.100'):
        return random.choice(self._ip_walk(start_ip, end_ip))

    def source_ip_range(self, start_ip = '10.10.0.1', end_ip = '10.10.0.100'):
        return self._ip_walk(start_ip, end_ip)

    def randomsourceip(self, start_ip = '10.10.0.1', end_ip = '10.10.0.100'):
        return random.choice(self._ip_walk(start_ip, end_ip))
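# Sketch (names hypothetical): the octet-carrying enumeration above can also be
# done with 32-bit integer arithmetic, which walks every address from start to
# end inclusive without special-casing the carry.

```python
import socket
import struct

def ip_to_int(ip):
    """Dotted quad -> 32-bit integer (network byte order)."""
    return struct.unpack('!I', socket.inet_aton(ip))[0]

def int_to_ip(n):
    """32-bit integer -> dotted quad."""
    return socket.inet_ntoa(struct.pack('!I', n))

def ip_range(start_ip, end_ip):
    """All addresses from start_ip to end_ip, inclusive."""
    s, e = ip_to_int(start_ip), ip_to_int(end_ip)
    return [int_to_ip(n) for n in range(s, e + 1)]
```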
    def get_igmp_intf(self):
        inst = os.getenv('TEST_INSTANCE', None)
        if not inst:
            return 'veth0'
        inst = int(inst) + 1
        if inst >= self.port_map['uplink']:
            inst += 1
        if self.port_map.has_key(inst):
            return self.port_map[inst]
        return 'veth0'

    def igmp_verify_join(self, igmpStateList):
        sendState, recvState = igmpStateList
        ## check if the send is received for the groups
        for g in sendState.groups:
            tx_stats = sendState.group_map[g][0]
            tx = tx_stats.count
            assert_greater(tx, 0)
            rx_stats = recvState.group_map[g][1]
            rx = rx_stats.count
            assert_greater(rx, 0)
            log_test.info('Receive stats %s for group %s' %(rx_stats, g))
        log_test.info('IGMP test verification success')

    def igmp_verify_leave(self, igmpStateList, leave_groups):
        sendState, recvState = igmpStateList[0], igmpStateList[1]
        ## check if the send is received for the groups
        for g in sendState.groups:
            tx_stats = sendState.group_map[g][0]
            rx_stats = recvState.group_map[g][1]
            tx = tx_stats.count
            rx = rx_stats.count
            assert_greater(tx, 0)
            if g not in leave_groups:
                log_test.info('Received %d packets for group %s' %(rx, g))
        for g in leave_groups:
            rx = recvState.group_map[g][1].count
            assert_equal(rx, 0)
        log_test.info('IGMP test verification success')
    def mcast_traffic_timer(self):
        log_test.info('MCAST traffic timer expiry')
        self.mcastTraffic.stopReceives()

    def send_mcast_cb(self, send_state):
        for g in send_state.groups:
            send_state.update(g, tx = 1)
        return 0

    ##Runs in the context of twisted reactor thread
    def igmp_recv(self, igmpState):
        s = socket_select([self.recv_socket], [], [], 1.0)
        if self.recv_socket in s[0]:
            p = self.recv_socket.recv()
            try:
                send_time = float(p.payload.load)
                recv_time = monotonic.monotonic()
            except:
                log_test.info('Unexpected Payload received: %s' %p.payload.load)
                return 0
            #log_test.info('Recv in %.6f secs' %(recv_time - send_time))
            igmpState.update(p.dst, rx = 1, t = recv_time - send_time)
        return 0
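# The receive path above recovers latency by parsing the sender's monotonic
# timestamp out of the packet payload. A sketch of the same idea with an
# in-memory "packet" (hypothetical helper names, Python 3 time.monotonic):

```python
import time

def make_payload():
    """Sender side: stamp the current monotonic time into the payload."""
    return repr(time.monotonic()).encode()

def recv_latency(payload):
    """Receiver side: elapsed time since the payload was stamped."""
    send_time = float(payload)
    return time.monotonic() - send_time

lat = recv_latency(make_payload())  # near-zero, but never negative
```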
    def send_igmp_join(self, groups, src_list = ['1.2.3.4'], record_type = IGMP_V3_GR_TYPE_INCLUDE,
                       ip_pkt = None, iface = 'veth0', ssm_load = False, delay = 1):
        if ssm_load is True:
            self.onos_ssm_table_load(groups, src_list)
        igmp = IGMPv3(type = IGMP_TYPE_V3_MEMBERSHIP_REPORT, max_resp_code = 30,
                      gaddr = self.IP_DST)
        for g in groups:
            gr = IGMPv3gr(rtype = record_type, mcaddr = g)
            gr.sources = src_list
            igmp.grps.append(gr)
        if ip_pkt is None:
            ip_pkt = self.igmp_eth/self.igmp_ip
        pkt = ip_pkt/igmp
        IGMPv3.fixup(pkt)
        sendp(pkt, iface = iface)
        if delay != 0:
            time.sleep(delay)
    def send_igmp_join_recvQuery(self, groups, rec_queryCount = None, src_list = ['1.2.3.4'], ip_pkt = None, iface = 'veth0', delay = 2):
        self.onos_ssm_table_load(groups, src_list)
        igmp = IGMPv3(type = IGMP_TYPE_V3_MEMBERSHIP_REPORT, max_resp_code = 30,
                      gaddr = self.IP_DST)
        for g in groups:
            gr = IGMPv3gr(rtype = IGMP_V3_GR_TYPE_INCLUDE, mcaddr = g)
            gr.sources = src_list
            igmp.grps.append(gr)
        if ip_pkt is None:
            ip_pkt = self.igmp_eth/self.igmp_ip
        pkt = ip_pkt/igmp
        IGMPv3.fixup(pkt)
        if rec_queryCount is None:
            log_test.info('Sending IGMP join for group %s and waiting for one query packet and printing the packet' %groups)
            resp = srp1(pkt, iface = iface)
        else:
            log_test.info('Sending IGMP join for group %s and waiting for periodic query packets and printing one packet' %groups)
            resp = srp1(pkt, iface = iface)
            # resp = srp1(pkt, iface=iface) if rec_queryCount else srp3(pkt, iface=iface)
        resp[0].summary()
        log_test.info('Sent IGMP join for group %s and received a query packet and printing packet' %groups)
        if delay != 0:
            time.sleep(delay)
    def send_igmp_leave(self, groups, src_list = ['1.2.3.4'], ip_pkt = None, iface = 'veth0', delay = 2):
        log_test.info('entering into igmp leave function')
        igmp = IGMPv3(type = IGMP_TYPE_V3_MEMBERSHIP_REPORT, max_resp_code = 30,
                      gaddr = self.IP_DST)
        for g in groups:
            gr = IGMPv3gr(rtype = IGMP_V3_GR_TYPE_EXCLUDE, mcaddr = g)
            gr.sources = src_list
            igmp.grps.append(gr)
        if ip_pkt is None:
            ip_pkt = self.igmp_eth/self.igmp_ip
        pkt = ip_pkt/igmp
        IGMPv3.fixup(pkt)
        sendp(pkt, iface = iface)
        if delay != 0:
            time.sleep(delay)

    def send_igmp_leave_listening_group_specific_query(self, groups, src_list = ['1.2.3.4'], ip_pkt = None, iface = 'veth0', delay = 2):
        igmp = IGMPv3(type = IGMP_TYPE_V3_MEMBERSHIP_REPORT, max_resp_code = 30,
                      gaddr = self.IP_DST)
        for g in groups:
            gr = IGMPv3gr(rtype = IGMP_V3_GR_TYPE_EXCLUDE, mcaddr = g)
            gr.sources = src_list
            igmp.grps.append(gr)
        if ip_pkt is None:
            ip_pkt = self.igmp_eth/self.igmp_ip
        pkt = ip_pkt/igmp
        IGMPv3.fixup(pkt)
        log_test.info('Sending IGMP leave for group %s and waiting for one group specific query packet and printing the packet' %groups)
        resp = srp1(pkt, iface = iface)
        resp[0].summary()
        log_test.info('Sent IGMP leave for group %s and received a group specific query packet and printing packet' %groups)
        if delay != 0:
            time.sleep(delay)
    @deferred(timeout=MCAST_TRAFFIC_TIMEOUT+10)
    def test_igmp_join_verify_traffic(self):
        groups = [self.MGROUP1, self.MGROUP1]
        self.onos_ssm_table_load(groups)
        df = defer.Deferred()
        igmpState = IGMPTestState(groups = groups, df = df)
        igmpStateRecv = IGMPTestState(groups = groups, df = df)
        igmpStateList = (igmpState, igmpStateRecv)
        tx_intf = self.port_map[self.PORT_TX_DEFAULT]
        rx_intf = self.port_map[self.PORT_RX_DEFAULT]
        mcastTraffic = McastTraffic(groups, iface = tx_intf, cb = self.send_mcast_cb, arg = igmpState)
        self.df = df
        self.mcastTraffic = mcastTraffic
        self.recv_socket = L3PacketSocket(iface = rx_intf, type = ETH_P_IP)

        def igmp_srp_task(stateList):
            igmpSendState, igmpRecvState = stateList
            if not mcastTraffic.isRecvStopped():
                self.igmp_recv(igmpRecvState)
                reactor.callLater(0, igmp_srp_task, stateList)
            else:
                self.mcastTraffic.stop()
                #log_test.info('Sending IGMP leave for groups: %s' %groups)
                self.send_igmp_leave(groups, iface = rx_intf, delay = 2)
                self.recv_socket.close()
                self.igmp_verify_join(stateList)
                self.df.callback(0)

        self.send_igmp_join(groups, iface = rx_intf)
        mcastTraffic.start()
        self.test_timer = reactor.callLater(self.MCAST_TRAFFIC_TIMEOUT, self.mcast_traffic_timer)
        reactor.callLater(0, igmp_srp_task, igmpStateList)
        return df
    @deferred(timeout=MCAST_TRAFFIC_TIMEOUT+40)
    def test_igmp_leave_verify_traffic(self):
        groups = [self.MGROUP1]
        leave_groups = [self.MGROUP1]
        self.onos_ssm_table_load(groups)
        df = defer.Deferred()
        igmpState = IGMPTestState(groups = groups, df = df)
        IGMPTestState(groups = groups, df = df)
        tx_intf = self.port_map[self.PORT_TX_DEFAULT]
        rx_intf = self.port_map[self.PORT_RX_DEFAULT]
        mcastTraffic = McastTraffic(groups, iface = tx_intf, cb = self.send_mcast_cb,
                                    arg = igmpState)
        self.df = df
        self.mcastTraffic = mcastTraffic
        self.recv_socket = L3PacketSocket(iface = rx_intf, type = ETH_P_IP)
        mcastTraffic.start()
        self.send_igmp_join(groups, iface = rx_intf)
        time.sleep(5)
        self.send_igmp_leave(leave_groups, delay = 3, iface = rx_intf)
        time.sleep(10)
        join_state = IGMPTestState(groups = leave_groups)
        status = self.igmp_not_recv_task(rx_intf, leave_groups, join_state)
        log_test.info('verified status for igmp recv task %s' %status)
        assert status == 1, 'EXPECTED RESULT'
        self.df.callback(0)
        return df
    @deferred(timeout=100)
    def test_igmp_leave_join_loop(self):
        self.groups = ['226.0.1.1', '227.0.0.1', '228.0.0.1', '229.0.0.1', '230.0.0.1']
        self.src_list = ['3.4.5.6', '7.8.9.10']
        self.onos_ssm_table_load(self.groups, src_list = self.src_list)
        df = defer.Deferred()
        self.df = df
        self.iterations = 0
        self.num_groups = len(self.groups)
        self.MAX_TEST_ITERATIONS = 10
        rx_intf = self.port_map[self.PORT_RX_DEFAULT]

        def igmp_srp_task(v):
            if self.iterations < self.MAX_TEST_ITERATIONS:
                if v == 1:
                    ##join test
                    self.num_groups = random.randint(0, len(self.groups))
                    self.send_igmp_join(self.groups[:self.num_groups],
                                        src_list = self.src_list,
                                        iface = rx_intf, delay = 0)
                else:
                    self.send_igmp_leave(self.groups[:self.num_groups],
                                         src_list = self.src_list,
                                         iface = rx_intf, delay = 0)
                self.iterations += 1
                v ^= 1
                reactor.callLater(1.0 + 0.5*self.num_groups,
                                  igmp_srp_task, v)
            else:
                self.df.callback(0)

        reactor.callLater(0, igmp_srp_task, 1)
        return df
    def igmp_join_task(self, intf, groups, state, src_list = ['1.2.3.4']):
        #self.onos_ssm_table_load(groups, src_list)
        igmp = IGMPv3(type = IGMP_TYPE_V3_MEMBERSHIP_REPORT, max_resp_code = 30,
                      gaddr = self.IP_DST)
        for g in groups:
            gr = IGMPv3gr(rtype = IGMP_V3_GR_TYPE_INCLUDE, mcaddr = g)
            gr.sources = src_list
            igmp.grps.append(gr)
        for g in groups:
            state.group_map[g][0].update(1, t = monotonic.monotonic())
        pkt = self.igmp_eth/self.igmp_ip/igmp
        IGMPv3.fixup(pkt)
        sendp(pkt, iface = intf)
        log_test.debug('Returning from join task')

    def igmp_recv_task(self, intf, groups, join_state):
        recv_socket = L3PacketSocket(iface = intf, type = ETH_P_IP)
        group_map = {}
        for g in groups:
            group_map[g] = [0, 0]
        log_test.info('Verifying join interface should receive multicast data')
        while True:
            p = recv_socket.recv()
            if p.dst in groups and group_map[p.dst][0] == 0:
                group_map[p.dst][0] += 1
                group_map[p.dst][1] = monotonic.monotonic()
            c = 0
            for g in groups:
                c += group_map[g][0]
            if c == len(groups):
                break
        for g in groups:
            join_start = join_state.group_map[g][0].start
            recv_time = group_map[g][1] * 1000000
            delta = (recv_time - join_start)
            log_test.info('Join for group %s received in %.3f usecs' %
                          (g, delta))
        recv_socket.close()
        log_test.debug('Returning from recv task')

    def igmp_not_recv_task(self, intf, groups, join_state):
        log_test.info('Entering igmp not recv task loop')
        recv_socket = L2Socket(iface = intf, type = ETH_P_IP)
        group_map = {}
        for g in groups:
            group_map[g] = [0, 0]
        log_test.info('Verifying join interface, should not receive any multicast data')
        self.NEGATIVE_TRAFFIC_STATUS = 1

        def igmp_recv_cb(pkt):
            log_test.info('Multicast packet %s received for left groups %s' %(pkt[IP].dst, groups))
            self.NEGATIVE_TRAFFIC_STATUS = 2

        sniff(prn = igmp_recv_cb, count = 1, lfilter = lambda p: IP in p and p[IP].dst in groups,
              timeout = 3, opened_socket = recv_socket)
        recv_socket.close()
        return self.NEGATIVE_TRAFFIC_STATUS
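# The negative-traffic check above waits a bounded window and reports status 1
# (nothing arrived, expected) or 2 (unexpected traffic). A sketch of the same
# pattern using select() on a plain socket pair instead of scapy's sniff
# (function and variable names hypothetical):

```python
import select
import socket

def negative_traffic_status(sock, timeout=3.0):
    """Return 2 if any data arrives within `timeout` seconds, else 1."""
    ready, _, _ = select.select([sock], [], [], timeout)
    if ready and sock.recv(4096):
        return 2  # unexpected data arrived during the window
    return 1      # silence for the whole window, as expected

a, b = socket.socketpair()
status_quiet = negative_traffic_status(b, timeout=0.1)  # nothing was sent
a.sendall(b'pkt')
status_noisy = negative_traffic_status(b, timeout=0.1)  # data is pending
```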
    def group_latency_check(self, groups):
        tasks = []
        self.send_igmp_leave(groups = groups)
        join_state = IGMPTestState(groups = groups)
        tasks.append(threading.Thread(target = self.igmp_join_task, args = ('veth0', groups, join_state,)))
        traffic_state = IGMPTestState(groups = groups)
        mcast_traffic = McastTraffic(groups, iface = 'veth2', cb = self.send_mcast_cb,
                                     arg = traffic_state)
        mcast_traffic.start()
        tasks.append(threading.Thread(target = self.igmp_recv_task, args = ('veth0', groups, join_state)))
        for t in tasks:
            t.start()
        for t in tasks:
            t.join()
        mcast_traffic.stop()
        self.send_igmp_leave(groups = groups)
        return

    @deferred(timeout=IGMP_QUERY_TIMEOUT + 10)
    def test_igmp_1group_join_latency(self):
        groups = ['239.0.1.1']
        df = defer.Deferred()

        def igmp_1group_join_latency():
            self.group_latency_check(groups)
            df.callback(0)

        reactor.callLater(0, igmp_1group_join_latency)
        return df

    @deferred(timeout=IGMP_QUERY_TIMEOUT + 10)
    def test_igmp_2group_join_latency(self):
        groups = [self.MGROUP1, self.MGROUP1]
        df = defer.Deferred()

        def igmp_2group_join_latency():
            self.group_latency_check(groups)
            df.callback(0)

        reactor.callLater(0, igmp_2group_join_latency)
        return df

    @deferred(timeout=IGMP_QUERY_TIMEOUT + 10)
    def test_igmp_Ngroup_join_latency(self):
        groups = ['239.0.1.1', '240.0.1.1', '241.0.1.1', '242.0.1.1']
        df = defer.Deferred()

        def igmp_Ngroup_join_latency():
            self.group_latency_check(groups)
            df.callback(0)

        reactor.callLater(0, igmp_Ngroup_join_latency)
        return df
    def test_igmp_join_rover_all(self):
        s = (224 << 24) | 1
        #e = (225 << 24) | (255 << 16) | (255 << 8) | 255
        e = (224 << 24) | 10
        for i in xrange(s, e+1):
            if i&0xff:
                ip = '%d.%d.%d.%d' %((i>>24)&0xff, (i>>16)&0xff, (i>>8)&0xff, i&0xff)
                self.send_igmp_join([ip], delay = 0)
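# The rover tests build dotted-quad strings from a 32-bit channel integer and
# skip addresses whose last octet is 0 (`i & 0xff` is falsy). A helper form of
# that conversion (sketch, `int_to_quad` is a hypothetical name):

```python
def int_to_quad(i):
    """32-bit integer -> dotted-quad string, mirroring the shifts above."""
    return '%d.%d.%d.%d' % ((i >> 24) & 0xff, (i >> 16) & 0xff,
                            (i >> 8) & 0xff, i & 0xff)

first = int_to_quad((224 << 24) | 1)        # first channel in the rover range
skipped = ((224 << 24) | 0) & 0xff == 0     # x.y.z.0 addresses are skipped
```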
    @deferred(timeout=ROVER_TEST_TIMEOUT)
    def test_igmp_join_rover(self):
        df = defer.Deferred()
        iface = self.get_igmp_intf()
        self.df = df
        self.count = 0
        self.timeout = 0
        self.complete = False

        def igmp_join_timer():
            self.timeout += self.ROVER_JOIN_TIMEOUT
            log_test.info('IGMP joins sent: %d' %self.count)
            if self.timeout >= self.ROVER_TIMEOUT:
                self.complete = True
            reactor.callLater(self.ROVER_JOIN_TIMEOUT, igmp_join_timer)

        reactor.callLater(self.ROVER_JOIN_TIMEOUT, igmp_join_timer)
        self.start_channel = (224 << 24) | 1
        self.end_channel = (224 << 24) | 200 #(225 << 24) | (255 << 16) | (255 << 8) | 255
        self.current_channel = self.start_channel

        def igmp_join_rover(self):
            #e = (224 << 24) | 10
            chan = self.current_channel
            self.current_channel += 1
            if self.current_channel >= self.end_channel:
                chan = self.current_channel = self.start_channel
            if chan&0xff:
                ip = '%d.%d.%d.%d' %((chan>>24)&0xff, (chan>>16)&0xff, (chan>>8)&0xff, chan&0xff)
                self.send_igmp_join([ip], delay = 0, ssm_load = False, iface = iface)
            self.count += 1
            if self.complete is True:
                log_test.info('%d IGMP joins sent in %d seconds over %s' %(self.count, self.timeout, iface))
                self.df.callback(0)
            else:
                reactor.callLater(0, igmp_join_rover, self)

        reactor.callLater(0, igmp_join_rover, self)
        return df
    @deferred(timeout=IGMP_QUERY_TIMEOUT + 10)
    def test_igmp_query(self):
        groups = ['224.0.0.1'] ##igmp query group
        self.onos_ssm_table_load(groups)
        df = defer.Deferred()
        self.df = df
        self.recv_socket = L2Socket(iface = 'veth0', type = ETH_P_IP)

        def igmp_query_timeout():
            def igmp_query_cb(pkt):
                log_test.info('received igmp query packet is %s' %pkt.show())
                log_test.info('Got IGMP query packet from %s for %s' %(pkt[IP].src, pkt[IP].dst))
                assert_equal(pkt[IP].dst, '224.0.0.1')
            sniff(prn = igmp_query_cb, count = 1, lfilter = lambda p: IP in p and p[IP].dst in groups,
                  opened_socket = self.recv_socket)
            self.recv_socket.close()
            self.df.callback(0)

        #self.send_igmp_join(groups)
        self.test_timer = reactor.callLater(self.IGMP_QUERY_TIMEOUT, igmp_query_timeout)
        return df
    def igmp_send_joins_different_groups_srclist(self, groups, sources, intf = V_INF1, delay = 2, ip_src = None):
        g1 = groups[0]
        g2 = groups[1]
        sourcelist1 = sources[0]
        sourcelist2 = sources[1]
        eth = Ether(dst = self.IGMP_DST_MAC, type = ETH_P_IP)
        ip = IP(dst = self.IP_DST)
        log_test.info('Sending join message for the group %s' %g1)
        self.send_igmp_join((g1,), src_list = sourcelist1, ip_pkt = eth/ip, iface = intf, delay = 2)
        eth = Ether(dst = self.MMACGROUP2, src = self.IGMP_SRC_MAC, type = ETH_P_IP)
        ip = IP(dst = g2)
        log_test.info('Sending join message for group %s' %g2)
        self.send_igmp_join((g2,), src_list = sourcelist2, ip_pkt = eth/ip, iface = intf, delay = 2)
        log_test.info('Done with igmp_send_joins_different_groups_srclist')

    def igmp_send_joins_different_groups_srclist_wait_query_packets(self, groups, sources, intf = V_INF1, delay = 2, ip_src = None, query_group1 = None, query_group2 = None):
        g1 = groups[0]
        g2 = groups[1]
        sourcelist1 = sources[0]
        sourcelist2 = sources[1]
        eth = Ether(dst = self.MMACGROUP1, src = self.IGMP_SRC_MAC, type = ETH_P_IP)
        src_ip = ip_src or self.IP_SRC
        ip = IP(dst = g1, src = src_ip)
        if query_group1 == 'group1':
            log_test.info('Sending join message for the group %s and waiting for a query packet on join interface' %g1)
            self.send_igmp_join_recvQuery((g1,), None, src_list = sourcelist1, ip_pkt = eth/ip, iface = intf, delay = 2)
        else:
            log_test.info('Sending join message for the group %s' %g1)
            self.send_igmp_join((g1,), src_list = sourcelist1, ip_pkt = eth/ip, iface = intf, delay = 2)
        eth = Ether(dst = self.MMACGROUP2, src = self.IGMP_SRC_MAC, type = ETH_P_IP)
        ip = IP(dst = g2, src = src_ip)
        if query_group2 == 'group2':
            log_test.info('Sending join message for the group %s and waiting for a query packet on join interface' %g2)
            self.send_igmp_join_recvQuery((g2,), None, src_list = sourcelist2, ip_pkt = eth/ip, iface = intf, delay = 2)
        else:
            log_test.info('Sending join message for group %s' %g2)
            self.send_igmp_join((g2,), src_list = sourcelist2, ip_pkt = eth/ip, iface = intf, delay = 2)
    def igmp_joins_leave(self, groups, src_list, again_join = False, df = None):
        groups1 = [groups[0]]
        groups2 = [groups[1]]
        src1 = [src_list[0]]
        src2 = [src_list[1]]
        self.igmp_send_joins_different_groups_srclist(groups1 + groups2,
                                                      (src1, src2), intf = self.V_INF1, delay = 2)
        src_ip = src1[0]
        if df is None:
            df = defer.Deferred()
        igmpState1 = IGMPTestState(groups = groups1, df = df)
        IGMPTestState(groups = groups1, df = df)
        igmpState2 = IGMPTestState(groups = groups2, df = df)
        IGMPTestState(groups = groups2, df = df)
        dst_mac = self.iptomac(groups1[0])
        mcastTraffic1 = McastTraffic(groups1, iface = 'veth2', dst_mac = dst_mac,
                                     src_ip = src_ip, cb = self.send_mcast_cb,
                                     arg = igmpState1)
        src_ip = src2[0]
        dst_mac = self.iptomac(groups1[0])
        mcastTraffic2 = McastTraffic(groups2, iface = 'veth2', dst_mac = dst_mac,
                                     src_ip = src_ip, cb = self.send_mcast_cb,
                                     arg = igmpState2)
        mcastTraffic1.start()
        mcastTraffic2.start()
        join_state1 = IGMPTestState(groups = groups1)
        join_state2 = IGMPTestState(groups = groups2)
        self.igmp_recv_task(self.V_INF1, groups1, join_state1)
        log_test.info('Interface is receiving multicast groups %s' %groups1)
        self.igmp_recv_task(self.V_INF1, groups2, join_state2)
        log_test.info('Interface is receiving multicast groups %s' %groups2)
        log_test.info('Interface is sending leave message for groups %s now' %groups2)
        self.send_igmp_leave(groups = groups2, src_list = src2, iface = self.V_INF1, delay = 2)
        self.igmp_recv_task(self.V_INF1, groups1, join_state1)
        target4 = self.igmp_not_recv_task(self.V_INF1, groups2, join_state2)
        assert target4 == 1, 'EXPECTED FAILURE'
        if again_join:
            dst_mac = '01:00:5e:02:02:03'
            ip_dst = '239.2.2.3'
            eth = Ether(dst = dst_mac, type = ETH_P_IP)
            ip = IP(dst = ip_dst)
            log_test.info('Interface sending join message again for the groups %s' %groups2)
            self.send_igmp_join(groups2, src_list = [src_ip], ip_pkt = eth/ip, iface = self.V_INF1, delay = 2)
            self.igmp_recv_task(self.V_INF1, groups2, join_state2)
            log_test.info('Interface is receiving multicast groups %s again' %groups2)
            self.igmp_recv_task(self.V_INF1, groups1, join_state1)
            log_test.info('Interface is still receiving from multicast groups %s' %groups1)
        else:
            log_test.info('Ended test case')
        mcastTraffic1.stop()
        mcastTraffic2.stop()

    @deferred(timeout=MCAST_TRAFFIC_TIMEOUT+20)
    def test_igmp_2joins_1leave(self):
        df = defer.Deferred()

        def igmp_2joins_1leave():
            groups = ['234.2.3.4', '236.8.7.9']
            src_list = ['2.3.4.5', '5.4.3.2']
            self.onos_ssm_table_load(groups, src_list = src_list)
            self.igmp_joins_leave(groups, src_list, again_join = False, df = df)
            df.callback(0)

        reactor.callLater(0, igmp_2joins_1leave)
        return df

    @deferred(timeout=MCAST_TRAFFIC_TIMEOUT+25)
    def test_igmp_2joins_1leave_and_join_again(self):
        df = defer.Deferred()

        def igmp_2joins_1leave_join_again():
            groups = ['234.2.3.4', '236.8.7.9']
            src_list = ['2.3.4.5', '5.4.3.2']
            self.onos_ssm_table_load(groups, src_list = src_list)
            self.igmp_joins_leave(groups, src_list, again_join = True, df = df)
            df.callback(0)

        reactor.callLater(0, igmp_2joins_1leave_join_again)
        return df
    def igmp_not_in_src_list(self, df = None):
        groups1 = (self.MGROUP1,)
        groups2 = (self.MGROUP2,)
        self.onos_ssm_table_load(groups1 + groups2, src_list = ['2.2.2.2', '3.3.3.3', '4.4.4.4', '2.2.2.2', '5.5.5.5'])
        self.igmp_send_joins_different_groups_srclist(groups1 + groups2,
                                                      (['2.2.2.2', '3.3.3.3', '4.4.4.4'], ['2.2.2.2', '5.5.5.5']),
                                                      intf = self.V_INF1, delay = 2)
        src_ip = '6.6.6.6'
        dst_mac = self.iptomac(groups1[0])
        if df is None:
            df = defer.Deferred()
        igmpState1 = IGMPTestState(groups = groups1, df = df)
        IGMPTestState(groups = groups1, df = df)
        mcastTraffic1 = McastTraffic(groups1, iface = 'veth2', dst_mac = dst_mac,
                                     src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
        mcastTraffic1.start()
        join_state1 = IGMPTestState(groups = groups1)
        log_test.info('Interface should not receive from multicast groups %s from an interface, which is expected' %groups1)
        target1 = self.igmp_not_recv_task(self.V_INF1, groups1, join_state1)
        assert target1 == 2, 'EXPECTED FAILURE'
        log_test.info('Interface is not receiving traffic from multicast groups %s, working as expected' %groups1)
        mcastTraffic1.stop()

    @deferred(timeout=MCAST_TRAFFIC_TIMEOUT+20)
    def test_igmp_not_in_src_list(self):
        df = defer.Deferred()

        def igmp_not_in_src_list():
            self.igmp_not_in_src_list(df = df)
            df.callback(0)

        reactor.callLater(0, igmp_not_in_src_list)
        return df
    def igmp_change_to_exclude_src_list(self, df = None):
        groups1 = [self.random_mcast_ip()]
        groups2 = [self.random_mcast_ip()]
        self.onos_ssm_table_load(groups1 + groups2, src_list = ['2.2.2.2', '3.3.3.3', '4.4.4.4', '2.2.2.2', '5.5.5.5'])
        self.igmp_send_joins_different_groups_srclist(groups1 + groups2,
                                                      (['2.2.2.2', '3.3.3.3', '4.4.4.4'], ['2.2.2.2', '5.5.5.5']),
                                                      intf = self.V_INF1, delay = 2)
        src_ip = '2.2.2.2'
        dst_mac = self.iptomac(groups1[0])
        if df is None:
            df = defer.Deferred()
        igmpState1 = IGMPTestState(groups = groups1, df = df)
        IGMPTestState(groups = groups1, df = df)
        mcastTraffic1 = McastTraffic(groups1, iface = 'veth2', dst_mac = dst_mac,
                                     src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
        mcastTraffic1.start()
        join_state1 = IGMPTestState(groups = groups1)
        self.igmp_recv_task(self.V_INF1, groups1, join_state1)
        self.send_igmp_leave(groups = groups1, src_list = ['2.2.2.2'], iface = self.V_INF1, delay = 2)
        target2 = self.igmp_not_recv_task(self.V_INF1, groups1, join_state1)
        assert target2 == 2, 'EXPECTED FAILURE'
        log_test.info('Interface is not receiving traffic from multicast groups %s after sending CHANGE_TO_EXCLUDE' %groups1)
        mcastTraffic1.stop()

    @deferred(timeout=MCAST_TRAFFIC_TIMEOUT+10)
    def test_igmp_change_to_exclude_src_list(self):
        df = defer.Deferred()

        def igmp_change_to_exclude_src_list():
            self.igmp_change_to_exclude_src_list(df = df)
            df.callback(0)

        reactor.callLater(0, igmp_change_to_exclude_src_list)
        return df
    def igmp_include_to_allow_src_list(self, df = None):
        groups1 = [self.random_mcast_ip()] #(self.MGROUP1,)
        self.onos_ssm_table_load(groups1, src_list = ['4.4.4.4', '6.6.6.6'])
        self.send_igmp_join(groups = groups1, src_list = ['4.4.4.4'], record_type = IGMP_V3_GR_TYPE_INCLUDE,
                            iface = self.V_INF1)
        src_ip = '4.4.4.4'
        if df is None:
            df = defer.Deferred()
        igmpState1 = IGMPTestState(groups = groups1, df = df)
        IGMPTestState(groups = groups1, df = df)
        mcastTraffic1 = McastTraffic(groups1, iface = 'veth2', src_ip = src_ip,
                                     cb = self.send_mcast_cb, arg = igmpState1)
        mcastTraffic1.start()
        join_state1 = IGMPTestState(groups = groups1)
        self.igmp_recv_task(self.V_INF1, groups1, join_state1)
        mcastTraffic1.stop()
        mcastTraffic2 = McastTraffic(groups1, iface = 'veth2', src_ip = '6.6.6.6',
                                     cb = self.send_mcast_cb, arg = igmpState1)
        self.send_igmp_join(groups = groups1, src_list = ['6.6.6.6'], record_type = IGMP_V3_GR_TYPE_ALLOW_NEW,
                            iface = self.V_INF1)
        mcastTraffic2.start()
        self.igmp_recv_task(self.V_INF1, groups1, join_state1)
        mcastTraffic2.stop()

    @deferred(timeout=MCAST_TRAFFIC_TIMEOUT+30)
    def test_igmp_include_to_allow_src_list(self):
        df = defer.Deferred()

        def igmp_include_to_allow_src_list():
            self.igmp_include_to_allow_src_list(df = df)
            df.callback(0)

        reactor.callLater(0, igmp_include_to_allow_src_list)
        return df
def igmp_include_to_block_src_list(self, df = None):
groups1 = [self.random_mcast_ip()] #groups1 = (self.MGROUP1,)
self.onos_ssm_table_load(groups1,src_list = ['4.4.4.4','6.6.6.6'])
self.send_igmp_join(groups = groups1, src_list = ['4.4.4.4','6.6.6.6'],record_type = IGMP_V3_GR_TYPE_INCLUDE,
iface = self.V_INF1)
if df is None:
df = defer.Deferred()
igmpState1 = IGMPTestState(groups = groups1, df = df)
IGMPTestState(groups = groups1, df = df)
mcastTraffic1 = McastTraffic(groups1, iface= 'veth2',src_ip = '6.6.6.6',
cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic1.start()
join_state1 = IGMPTestState(groups = groups1)
self.igmp_recv_task(self.V_INF1, groups1, join_state1)
mcastTraffic1.stop()
self.send_igmp_join(groups = groups1, src_list = ['6.6.6.6'],record_type = IGMP_V3_GR_TYPE_BLOCK_OLD,
iface = self.V_INF1)
mcastTraffic2 = McastTraffic(groups1, iface= 'veth2',src_ip = '6.6.6.6',
cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic2.start()
target1 = self.igmp_not_recv_task(self.V_INF1, groups1, join_state1)
assert target1 == 1, 'EXPECTED FAILURE'
log_test.info('Interface is not receiving traffic from multicast group %s after we sent a block for the old source list' %groups1)
mcastTraffic2.stop()
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+30)
def test_igmp_include_to_block_src_list(self):
df = defer.Deferred()
def igmp_include_to_block_src_list():
self.igmp_include_to_block_src_list(df = df)
df.callback(0)
reactor.callLater(0, igmp_include_to_block_src_list)
return df
def igmp_change_to_include_src_list(self, df = None):
groups1 = [self.random_mcast_ip()]
src_list = ['4.4.4.4','6.6.6.6']
self.onos_ssm_table_load(groups1,src_list = src_list)
self.send_igmp_leave(groups = groups1, src_list = src_list,
iface = self.V_INF1, delay = 2)
if df is None:
df = defer.Deferred()
igmpState1 = IGMPTestState(groups = groups1, df = df)
mcastTraffic1 = McastTraffic(groups1, iface= 'veth2',src_ip = src_list[0],
cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic1.start()
join_state1 = IGMPTestState(groups = groups1)
target1= self.igmp_not_recv_task(self.V_INF1, groups1, join_state1)
assert target1 == 1, 'EXPECTED FAILURE'
log_test.info('Interface is not receiving traffic from multicast groups %s' %groups1)
mcastTraffic1.stop()
self.send_igmp_join(groups = groups1, src_list = src_list,record_type = IGMP_V3_GR_TYPE_INCLUDE,
iface = self.V_INF1)
mcastTraffic2 = McastTraffic(groups1, iface= 'veth2',src_ip = src_list[1],
cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic2.start()
self.igmp_recv_task(self.V_INF1, groups1, join_state1)
mcastTraffic2.stop()
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+10)
def test_igmp_change_to_include_src_list(self):
df = defer.Deferred()
def igmp_change_to_include_src_list():
self.igmp_change_to_include_src_list(df = df)
df.callback(0)
reactor.callLater(0, igmp_change_to_include_src_list)
return df
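The INCLUDE/ALLOW_NEW/BLOCK_OLD record types exercised above follow the IGMPv3 source-filter semantics of RFC 3376: a receiver keeps a per-group filter mode plus a source set, and later records edit that set. A simplified model of those transitions (`SourceFilter` is a hypothetical illustration class; it ignores timers and the group-and-source-specific queries a real router sends on BLOCK_OLD):

```python
# Minimal model of the IGMPv3 per-group source filter these tests exercise
# (RFC 3376 semantics, simplified).

class SourceFilter:
    def __init__(self, mode='INCLUDE', sources=()):
        self.mode = mode            # 'INCLUDE' or 'EXCLUDE'
        self.sources = set(sources)

    def accepts(self, src):
        # INCLUDE: only listed sources pass; EXCLUDE: everything but listed.
        if self.mode == 'INCLUDE':
            return src in self.sources
        return src not in self.sources

    def allow_new(self, srcs):
        # ALLOW_NEW_SOURCES record: start accepting these sources.
        if self.mode == 'INCLUDE':
            self.sources |= set(srcs)
        else:
            self.sources -= set(srcs)

    def block_old(self, srcs):
        # BLOCK_OLD_SOURCES record (simplified: in INCLUDE mode the blocked
        # sources are simply dropped from the accepted set).
        if self.mode == 'INCLUDE':
            self.sources -= set(srcs)
        else:
            self.sources |= set(srcs)

f = SourceFilter('INCLUDE', ['4.4.4.4'])
f.allow_new(['6.6.6.6'])
assert f.accepts('4.4.4.4') and f.accepts('6.6.6.6')
f.block_old(['6.6.6.6'])
assert f.accepts('4.4.4.4') and not f.accepts('6.6.6.6')
```

This mirrors the include-to-allow and include-to-block tests: after ALLOW_NEW both sources should be received, and after BLOCK_OLD only the remaining source should pass.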
#this test case is failing because a group in INCLUDE mode receives multicast traffic from any source
def igmp_exclude_to_allow_src_list(self, df = None):
groups1 = (self.MGROUP1,)
groups2 = (self.MGROUP2,)
self.onos_ssm_table_load(groups1+groups2,src_list = ['2.2.2.2', '3.3.3.3', '4.4.4.4','6.6.6.6', '7.7.7.7', '8.8.8.8','5.5.5.5'])
self.send_igmp_leave(groups = groups1, src_list = ['2.2.2.2', '3.3.3.3', '4.4.4.4'],
iface = self.V_INF1, delay = 2)
dst_mac = '01:00:5e:01:02:03'
src_ip = '2.2.2.2'
if df is None:
df = defer.Deferred()
igmpState1 = IGMPTestState(groups = groups1, df = df)
mcastTraffic1 = McastTraffic(groups1, iface= 'veth2', dst_mac = dst_mac,
src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic1.start()
join_state1 = IGMPTestState(groups = groups1)
target1= self.igmp_not_recv_task(self.V_INF1, groups1, join_state1)
assert target1 == 1, 'EXPECTED FAILURE'
log_test.info('Interface is not receiving traffic from multicast groups %s' %groups1)
self.igmp_send_joins_different_groups_srclist(groups1 + groups2,
(['6.6.6.6', '7.7.7.7', '8.8.8.8'], ['6.6.6.6', '5.5.5.5']),
intf = self.V_INF1, delay = 2)
target1= self.igmp_not_recv_task(self.V_INF1, groups1, join_state1)
assert target1 == 1, 'EXPECTED FAILURE'
log_test.info('Interface is not receiving traffic from multicast groups %s' %groups1)
mcastTraffic1.stop()
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+10)
def test_igmp_exclude_to_allow_src_list(self):
df = defer.Deferred()
def igmp_exclude_to_allow_src_list():
self.igmp_exclude_to_allow_src_list(df = df)
df.callback(0)
reactor.callLater(0, igmp_exclude_to_allow_src_list)
return df
def igmp_exclude_to_block_src_list(self, df = None):
groups1 = (self.MGROUP1,)
groups2 = (self.MGROUP2,)
self.onos_ssm_table_load(groups1+groups2,src_list = ['2.2.2.2', '3.3.3.3', '4.4.4.4','7.7.7.7','5.5.5.5'])
self.send_igmp_leave(groups = groups1, src_list = ['2.2.2.2', '3.3.3.3', '4.4.4.4'],
iface = self.V_INF1, delay = 2)
dst_mac = '01:00:5e:01:02:03'
src_ip = '2.2.2.2'
if df is None:
df = defer.Deferred()
igmpState1 = IGMPTestState(groups = groups1, df = df)
mcastTraffic1 = McastTraffic(groups1, iface= 'veth2', dst_mac = dst_mac,
src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic1.start()
join_state1 = IGMPTestState(groups = groups1)
target1= self.igmp_not_recv_task(self.V_INF1, groups1, join_state1)
assert target1 == 1, 'EXPECTED FAILURE'
log_test.info('Interface is not receiving traffic from multicast groups %s' %groups1)
self.send_igmp_leave(groups = groups1, src_list = ['2.2.2.2', '3.3.3.3', '4.4.4.4', '5.5.5.5', '7.7.7.7'],
iface = self.V_INF1, delay = 2)
target1= self.igmp_not_recv_task(self.V_INF1, groups1, join_state1)
assert target1 == 1, 'EXPECTED FAILURE'
log_test.info('Interface is not receiving traffic from multicast groups %s' %groups1)
mcastTraffic1.stop()
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+10)
def test_igmp_exclude_to_block_src_list(self):
df = defer.Deferred()
def igmp_exclude_to_block_src_list():
self.igmp_exclude_to_block_src_list(df = df)
df.callback(0)
reactor.callLater(0, igmp_exclude_to_block_src_list)
return df
#this test case is failing because a group in INCLUDE mode receives traffic from other sources as well.
def igmp_new_src_list(self, df = None):
groups1 = (self.MGROUP1,)
groups2 = (self.MGROUP2,)
self.onos_ssm_table_load(groups1+groups2,src_list = ['2.2.2.2', '3.3.3.3', '4.4.4.4','5.5.5.5','6.6.6.6'])
self.igmp_send_joins_different_groups_srclist(groups1+groups2,
(['2.2.2.2', '3.3.3.3', '4.4.4.4'], ['2.2.2.2', '5.5.5.5']),
intf = self.V_INF1, delay = 2)
dst_mac = '01:00:5e:01:02:03'
src_ip = '6.6.6.6'
if df is None:
df = defer.Deferred()
igmpState1 = IGMPTestState(groups = groups1, df = df)
mcastTraffic1 = McastTraffic(groups1, iface= 'veth2', dst_mac = dst_mac,
src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic1.start()
join_state1 = IGMPTestState(groups = groups1)
target1 = self.igmp_not_recv_task(self.V_INF1, groups1, join_state1)
assert target1 == 1, 'EXPECTED FAILURE'
log_test.info('Interface is not receiving traffic from multicast groups %s' %groups1)
self.igmp_send_joins_different_groups_srclist(groups1 + groups2,
(['2.2.2.2', '6.6.6.6', '3.3.3.3', '4.4.4.4'], ['2.2.2.2', '5.5.5.5']),
intf = self.V_INF1, delay = 2)
self.igmp_recv_task(self.V_INF1, groups1, join_state1)
log_test.info('Interface is receiving traffic from multicast groups %s after sending join with new source list' %groups1)
mcastTraffic1.stop()
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+10)
def test_igmp_new_src_list(self):
df = defer.Deferred()
def igmp_new_src_list():
self.igmp_new_src_list(df = df)
df.callback(0)
reactor.callLater(0, igmp_new_src_list)
return df
def igmp_block_old_src_list(self, df = None):
groups1 = (self.MGROUP1,)
groups2 = (self.MGROUP2,)
groups = groups1 + groups2
self.onos_ssm_table_load(groups1+groups2,src_list = ['2.2.2.2', '3.3.3.3', '4.4.4.4','5.5.5.5','6.6.6.6','7.7.7.7'])
self.igmp_send_joins_different_groups_srclist(groups,
(['2.2.2.2', '3.3.3.3', '4.4.4.4'], ['2.2.2.2', '5.5.5.5']),
intf = self.V_INF1, delay = 2)
dst_mac = '01:00:5e:02:02:03'
src_ip = '5.5.5.5'
if df is None:
df = defer.Deferred()
igmpState1 = IGMPTestState(groups = groups2, df = df)
mcastTraffic1 = McastTraffic(groups2, iface= 'veth2', dst_mac = dst_mac,
src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic1.start()
join_state1 = IGMPTestState(groups = groups2)
self.igmp_recv_task(self.V_INF1, groups2, join_state1)
log_test.info('Interface is receiving traffic from multicast groups %s' %groups2)
self.igmp_send_joins_different_groups_srclist(groups,
(['6.6.6.6', '3.3.3.3', '4.4.4.4'], ['2.2.2.2', '7.7.7.7']),
intf = self.V_INF1, delay = 2)
target2 = self.igmp_not_recv_task(self.V_INF1, groups2, join_state1)
assert target2 == 1, 'EXPECTED FAILURE'
log_test.info('Interface is not receiving traffic from multicast groups %s after sending join with block old source list' %groups2)
mcastTraffic1.stop()
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+20)
def test_igmp_block_old_src_list(self):
df = defer.Deferred()
def igmp_block_old_src_list():
self.igmp_block_old_src_list(df = df)
df.callback(0)
reactor.callLater(0, igmp_block_old_src_list)
return df
def igmp_include_empty_src_list(self, df = None):
groups1 = (self.MGROUP1,)
groups2 = (self.MGROUP2,)
groups = groups1 + groups2
self.igmp_send_joins_different_groups_srclist(groups,
(['2.2.2.2', '3.3.3.3', '4.4.4.4'], ['0']),
intf = self.V_INF1, delay = 2)
dst_mac = '01:00:5e:02:02:03'
src_ip = '5.5.5.5'
if df is None:
df = defer.Deferred()
igmpState1 = IGMPTestState(groups = groups2, df = df)
mcastTraffic1 = McastTraffic(groups2, iface= 'veth2', dst_mac = dst_mac,
src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic1.start()
join_state1 = IGMPTestState(groups = groups2)
target1 = self.igmp_not_recv_task(self.V_INF1, groups2, join_state1)
assert target1==1, 'EXPECTED FAILURE'
log_test.info('Interface is not receiving traffic from multicast groups %s when we sent a join with an empty source list' %groups2)
mcastTraffic1.stop()
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+20)
def test_igmp_include_empty_src_list(self):
## Disabling this test as scapy IGMP does not work with empty source lists
df = defer.Deferred()
def igmp_include_empty_src_list():
self.igmp_include_empty_src_list(df = df)
df.callback(0)
reactor.callLater(0, igmp_include_empty_src_list)
return df
def igmp_exclude_empty_src_list(self, df = None):
groups2 = (self.MGROUP2,)
self.send_igmp_leave(groups = groups2, src_list = ['0'], iface = self.V_INF1, delay = 2)
dst_mac = '01:00:5e:02:02:03'
src_ip = '5.5.5.5'
if df is None:
df = defer.Deferred()
igmpState1 = IGMPTestState(groups = groups2, df = df)
mcastTraffic1 = McastTraffic(groups2, iface= 'veth2', dst_mac = dst_mac,
src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic1.start()
join_state1 = IGMPTestState(groups = groups2)
self.igmp_recv_task(self.V_INF1, groups2, join_state1)
log_test.info('Interface is receiving multicast groups %s' %groups2)
mcastTraffic1.stop()
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+20)
def test_igmp_exclude_empty_src_list(self):
df = defer.Deferred()
def igmp_exclude_empty_src_list():
self.igmp_exclude_empty_src_list(df = df)
df.callback(0)
reactor.callLater(0, igmp_exclude_empty_src_list)
return df
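The hard-coded `dst_mac` values such as `01:00:5e:02:02:03` follow the standard IPv4-multicast MAC mapping from RFC 1112: the fixed `01:00:5e` prefix plus the low 23 bits of the group address. A small helper showing the mapping (`mcast_mac` is a hypothetical name, not part of this suite):

```python
def mcast_mac(group_ip):
    """Map an IPv4 multicast group to its Ethernet MAC (RFC 1112):
    01:00:5e prefix + the low 23 bits of the group address."""
    o = [int(x) for x in group_ip.split('.')]
    # the top bit of the second octet is discarded (only 23 bits carry over)
    return '01:00:5e:%02x:%02x:%02x' % (o[1] & 0x7f, o[2], o[3])

# e.g. a 239.2.2.3-style group maps to the dst_mac used above:
print(mcast_mac('239.2.2.3'))   # -> 01:00:5e:02:02:03
```

Because only 23 bits survive, 32 different group addresses share each MAC, which is why receivers still filter on the IP group after the L2 match.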
def igmp_join_sourceip_0_0_0_0(self, df = None):
groups1 = (self.MGROUP1,)
groups2 = (self.MGROUP2,)
groups = groups1 + groups2
ip_src = '0.0.0.0'
self.igmp_send_joins_different_groups_srclist(groups,
(['2.2.2.2', '3.3.3.3', '4.4.4.4'], ['5.5.5.5']),
intf = self.V_INF1, delay = 2, ip_src = ip_src)
ip_src = self.IP_SRC
dst_mac = '01:00:5e:02:02:03'
src_ip = '5.5.5.5'
if df is None:
df = defer.Deferred()
igmpState1 = IGMPTestState(groups = groups2, df = df)
mcastTraffic1 = McastTraffic(groups2, iface= 'veth2', dst_mac = dst_mac,
src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic1.start()
join_state1 = IGMPTestState(groups = groups2)
self.igmp_recv_task(self.V_INF1, groups2, join_state1)
log_test.info('Interface is receiving traffic from multicast groups %s when we sent a join with source IP 0.0.0.0' %groups2)
mcastTraffic1.stop()
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+20)
def test_igmp_join_sourceip_0_0_0_0(self):
df = defer.Deferred()
def igmp_join_sourceip_0_0_0_0():
self.igmp_join_sourceip_0_0_0_0(df = df)
df.callback(0)
reactor.callLater(0, igmp_join_sourceip_0_0_0_0)
return df
def igmp_invalid_join_packet(self, df = None):
groups1 = (self.MGROUP1,)
groups2 = (self.MINVALIDGROUP1,)
groups = groups1 + groups2
ip_src = '1.1.1.1'
self.igmp_send_joins_different_groups_srclist(groups,
(['2.2.2.2', '3.3.3.3', '4.4.4.4'], ['5.5.5.5']),
intf = self.V_INF1, delay = 2, ip_src = ip_src)
ip_src = self.IP_SRC
dst_mac = '01:00:5e:02:02:03'
src_ip = '5.5.5.5'
if df is None:
df = defer.Deferred()
igmpState1 = IGMPTestState(groups = groups2, df = df)
mcastTraffic1 = McastTraffic(groups2, iface= 'veth2', dst_mac = dst_mac,
src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic1.start()
join_state1 = IGMPTestState(groups = groups2)
target1 = self.igmp_not_recv_task(self.V_INF1, groups2, join_state1)
assert target1==1, 'EXPECTED FAILURE'
log_test.info('Interface is not receiving traffic from multicast groups %s when we sent an invalid join packet' %groups2)
mcastTraffic1.stop()
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+20)
def test_igmp_invalid_join_packet(self):
df = defer.Deferred()
def igmp_invalid_join_packet():
self.igmp_invalid_join_packet(df = df)
df.callback(0)
reactor.callLater(0, igmp_invalid_join_packet)
return df
def igmp_join_data_received_during_subscriber_link_toggle(self, df = None):
groups1 = (self.MGROUP1,)
groups2 = (self.MGROUP2,)
groups = groups1 + groups2
ip_src = '1.1.1.1'
self.igmp_send_joins_different_groups_srclist(groups,
(['2.2.2.2', '3.3.3.3', '4.4.4.4'], ['5.5.5.5']),
intf = self.V_INF1, delay = 2, ip_src = ip_src)
ip_src = self.IP_SRC
dst_mac = '01:00:5e:02:02:03'
src_ip = '5.5.5.5'
if df is None:
df = defer.Deferred()
igmpState1 = IGMPTestState(groups = groups2, df = df)
mcastTraffic1 = McastTraffic(groups2, iface= 'veth2', dst_mac = dst_mac,
src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic1.start()
join_state1 = IGMPTestState(groups = groups2)
self.igmp_recv_task(self.V_INF1, groups2, join_state1)
log_test.info('Interface is receiving traffic from multicast groups before bringing down the interface %s' %self.V_INF1)
os.system('ifconfig '+self.V_INF1+' down')
log_test.info('Interface %s is down now' %self.V_INF1)
os.system('ifconfig '+self.V_INF1)
time.sleep(10)
os.system('ifconfig '+self.V_INF1+' up')
os.system('ifconfig '+self.V_INF1)
log_test.info('Interface %s is up now' %self.V_INF1)
self.igmp_recv_task(self.V_INF1, groups2, join_state1)
log_test.info('Interface is receiving traffic from multicast groups %s after the interface was brought back up' %groups2)
mcastTraffic1.stop()
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+20)
def test_igmp_join_data_received_during_subscriber_link_toggle(self):
df = defer.Deferred()
def igmp_join_data_received_during_subscriber_link_toggle():
self.igmp_join_data_received_during_subscriber_link_toggle(df = df)
df.callback(0)
reactor.callLater(0, igmp_join_data_received_during_subscriber_link_toggle)
return df
def igmp_join_data_received_during_channel_distributor_link_toggle(self, df = None):
groups1 = (self.MGROUP1,)
groups2 = (self.MGROUP2,)
groups = groups1 + groups2
ip_src = '1.1.1.1'
self.igmp_send_joins_different_groups_srclist(groups,
(['2.2.2.2', '3.3.3.3', '4.4.4.4'], ['5.5.5.5', '6.6.6.6']),
intf = self.V_INF1, delay = 2, ip_src = ip_src)
ip_src = self.IP_SRC
dst_mac1 = '01:00:5e:01:02:03'
dst_mac2 = '01:00:5e:02:02:03'
src_ip2 = '5.5.5.5'
src_ip1 = '2.2.2.2'
if df is None:
df = defer.Deferred()
igmpState1 = IGMPTestState(groups = groups1, df = df)
igmpState2 = IGMPTestState(groups = groups2, df = df)
mcastTraffic1 = McastTraffic(groups1, iface= 'veth2', dst_mac = dst_mac1,
src_ip = src_ip1, cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic2 = McastTraffic(groups2, iface= 'veth3', dst_mac = dst_mac2,
src_ip = src_ip2, cb = self.send_mcast_cb, arg = igmpState2)
mcastTraffic1.start()
mcastTraffic2.start()
join_state1 = IGMPTestState(groups = groups1)
join_state2 = IGMPTestState(groups = groups2)
self.igmp_recv_task(self.V_INF1, groups1, join_state1)
self.igmp_recv_task(self.V_INF1, groups2, join_state2)
mcastTraffic1.stop()
os.system('ifconfig '+'veth2'+' down')
os.system('ifconfig '+'veth2')
time.sleep(10)
self.igmp_not_recv_task(self.V_INF1, groups2, join_state1)
target1 = self.igmp_not_recv_task(self.V_INF1, groups1, join_state1)
assert target1==1, 'EXPECTED FAILURE'
os.system('ifconfig '+'veth2'+' up')
os.system('ifconfig '+'veth2')
time.sleep(10)
mcastTraffic1.start()
self.igmp_not_recv_task(self.V_INF1, groups1, join_state1)
self.igmp_recv_task(self.V_INF1, groups2, join_state2)
mcastTraffic2.stop()
## This test case fails to receive multicast data traffic from different channel interfaces TO-DO
###### TO DO scenario #######
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+60)
def test_igmp_join_data_received_during_channel_distributors_link_toggle(self):
df = defer.Deferred()
def igmp_join_data_receiving_during_channel_distributor_link_toggle():
self.igmp_join_data_received_during_channel_distributor_link_toggle(df = df)
df.callback(0)
reactor.callLater(0, igmp_join_data_receiving_during_channel_distributor_link_toggle)
return df
def igmp_invalidClassD_IP_join_packet(self, df = None):
groups1 = (self.MGROUP1,)
groups2 = (self.MINVALIDGROUP2,)
groups = groups1 + groups2
ip_src = '1.1.1.1'
self.igmp_send_joins_different_groups_srclist(groups,
(['2.2.2.2', '3.3.3.3', '4.4.4.4'], ['5.5.5.5']),
intf = self.V_INF1, delay = 2, ip_src = ip_src)
ip_src = self.IP_SRC
dst_mac = '01:00:5e:02:02:03'
src_ip = '5.5.5.5'
if df is None:
df = defer.Deferred()
igmpState1 = IGMPTestState(groups = groups2, df = df)
mcastTraffic1 = McastTraffic(groups2, iface= 'veth2', dst_mac = dst_mac,
src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic1.start()
join_state1 = IGMPTestState(groups = groups2)
target1 = self.igmp_not_recv_task(self.V_INF1, groups2, join_state1)
assert target1==1, 'EXPECTED FAILURE'
log_test.info('Interface is not receiving traffic from multicast groups %s when we sent an invalid join packet' %groups2)
mcastTraffic1.stop()
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+20)
def test_igmp_invalid_class_d_ip_for_join_packet(self):
df = defer.Deferred()
def igmp_invalidClass_D_IP_join_packet():
self.igmp_invalidClassD_IP_join_packet(df = df)
df.callback(0)
reactor.callLater(0, igmp_invalidClass_D_IP_join_packet)
return df
def igmp_invalidClassD_IP_as_srclistIP_join_packet(self, df = None):
groups1 = (self.MGROUP1,)
groups2 = (self.MGROUP2,)
groups = groups1 + groups2
ip_src = '1.1.1.1'
self.igmp_send_joins_different_groups_srclist(groups,
(['2.2.2.2', '3.3.3.3', '4.4.4.4'], ['239.5.5.5']),
intf = self.V_INF1, delay = 2, ip_src = ip_src)
ip_src = self.IP_SRC
dst_mac = '01:00:5e:02:02:03'
src_ip = '5.5.5.5'
if df is None:
df = defer.Deferred()
igmpState1 = IGMPTestState(groups = groups2, df = df)
mcastTraffic1 = McastTraffic(groups2, iface= 'veth2', dst_mac = dst_mac,
src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic1.start()
join_state1 = IGMPTestState(groups = groups2)
target1 = self.igmp_not_recv_task(self.V_INF1, groups2, join_state1)
assert target1==1, 'EXPECTED FAILURE'
log_test.info('Interface is not receiving traffic from multicast groups %s when we sent an invalid join packet' %groups2)
mcastTraffic1.stop()
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+20)
def test_igmp_invalid_class_d_ip_as_srclist_ip_for_join_packet(self):
df = defer.Deferred()
def igmp_invalidClassD_IP_as_srclistIP_join_packet():
self.igmp_invalidClassD_IP_as_srclistIP_join_packet(df = df)
df.callback(0)
reactor.callLater(0, igmp_invalidClassD_IP_as_srclistIP_join_packet)
return df
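The invalid-join tests above hinge on class D addressing: a well-formed join names a multicast group (class D, 224.0.0.0/4) and lists only unicast sources, so a class D address in the source list, or a non-class-D group, should be rejected. A small validity check using the standard `ipaddress` module (`valid_igmp_join` is a hypothetical helper, not part of this suite):

```python
import ipaddress

def valid_igmp_join(group, sources):
    """A join is well-formed only when the group is a class D (multicast)
    address and every source-list entry is unicast (not class D)."""
    g = ipaddress.ip_address(group)
    return g.is_multicast and all(
        not ipaddress.ip_address(s).is_multicast for s in sources)

print(valid_igmp_join('239.2.2.3', ['2.2.2.2']))    # True
print(valid_igmp_join('239.2.2.3', ['239.5.5.5']))  # False: class D source
```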
def igmp_general_query_recv_packet(self, df = None):
groups1 = (self.MGROUP1,)
groups2 = (self.MGROUP2,)
groups = groups1 + groups2
ip_src = '1.1.1.1'
self.igmp_send_joins_different_groups_srclist(groups,
(['2.2.2.2', '3.3.3.3', '4.4.4.4'], ['5.5.5.5']),
intf = self.V_INF1, delay = 2, ip_src = ip_src)
ip_src = self.IP_SRC
dst_mac = '01:00:5e:02:02:03'
src_ip = '5.5.5.5'
if df is None:
df = defer.Deferred()
igmpState1 = IGMPTestState(groups = groups2, df = df)
mcastTraffic1 = McastTraffic(groups2, iface= 'veth2', dst_mac = dst_mac,
src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic1.start()
join_state1 = IGMPTestState(groups = groups2)
log_test.info('Started delay to verify whether multicast data traffic for group %s is still received over the next 190 sec ' %groups2)
time.sleep(100)
self.igmp_recv_task(self.V_INF1, groups2, join_state1)
log_test.info('Verified that multicast data for group %s is received after 100 sec ' %groups2)
time.sleep(50)
self.igmp_recv_task(self.V_INF1, groups2, join_state1)
log_test.info('Verified that multicast data for group %s is received after 150 sec ' %groups2)
time.sleep(30)
self.igmp_recv_task(self.V_INF1, groups2, join_state1)
log_test.info('Verified that multicast data for group %s is received after 180 sec ' %groups2)
time.sleep(10)
self.igmp_recv_task(self.V_INF1, groups2, join_state1)
log_test.info('Verified that multicast data for group %s is received after 190 sec ' %groups2)
target3 = mcastTraffic1.isRecvStopped()
assert target3==False, 'EXPECTED FAILURE'
log_test.info('Verified that multicast data for a group %s is still transmitting from a data interface' %groups2)
log_test.info('Now checking join interface is receiving a multicast data for group %s after 190 sec' %groups2)
target1 = self.igmp_not_recv_task(self.V_INF1, groups2, join_state1)
assert target1==1, 'EXPECTED FAILURE'
log_test.info('Interface is not receiving multicast data for group %s' %groups2)
mcastTraffic1.stop()
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+250)
def test_igmp_general_query_received_traffic(self):
df = defer.Deferred()
def igmp_general_query_recv_packet():
self.igmp_general_query_recv_packet(df = df)
df.callback(0)
reactor.callLater(0, igmp_general_query_recv_packet)
return df
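The sleep milestones above (100/150/180/190 sec) probe how long group state survives around the querier's timers. For reference, RFC 3376 defines the Group Membership Interval as robustness × query interval + query response interval, which is 2 × 125 + 10 = 260 sec with the protocol defaults; a trivial helper (hypothetical name) makes the arithmetic explicit:

```python
def group_membership_interval(robustness=2, query_interval=125,
                              query_response_interval=10):
    """RFC 3376 Group Membership Interval: how long group state is kept
    without hearing a new membership report (seconds, RFC defaults)."""
    return robustness * query_interval + query_response_interval

print(group_membership_interval())  # -> 260
```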
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+100)
def test_igmp_query_received_on_joining_interface(self):
groups = ['224.0.1.10', '225.0.0.10']
leave_groups = ['224.0.1.10']
df = defer.Deferred()
igmpState = IGMPTestState(groups = groups, df = df)
igmpStateRecv = IGMPTestState(groups = groups, df = df)
igmpStateList = (igmpState, igmpStateRecv)
mcastTraffic = McastTraffic(groups, iface= 'veth2', cb = self.send_mcast_cb,
arg = igmpState)
self.df = df
self.mcastTraffic = mcastTraffic
self.recv_socket = L3PacketSocket(iface = 'veth0', type = ETH_P_IP)
def igmp_srp_task(stateList):
igmpSendState, igmpRecvState = stateList
if not mcastTraffic.isRecvStopped():
self.igmp_recv(igmpRecvState)
reactor.callLater(0, igmp_srp_task, stateList)
else:
self.mcastTraffic.stop()
self.recv_socket.close()
self.igmp_verify_leave(stateList, leave_groups)
self.df.callback(0)
log_test.info('Sending join packet and expecting to receive a general query packet after 60 sec for multicast group %s ' %groups)
self.send_igmp_join_recvQuery(groups)
log_test.info('Received a general query packet for multicast group %s on the joining interface, now sending traffic' %groups)
mcastTraffic.start()
self.test_timer = reactor.callLater(self.MCAST_TRAFFIC_TIMEOUT, self.mcast_traffic_timer)
reactor.callLater(0, igmp_srp_task, igmpStateList)
return df
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+190)
def test_igmp_for_periodic_query_received_on_joining_interface(self):
groups = ['224.0.1.10', '225.0.0.10']
leave_groups = ['224.0.1.10']
df = defer.Deferred()
igmpState = IGMPTestState(groups = groups, df = df)
igmpStateRecv = IGMPTestState(groups = groups, df = df)
igmpStateList = (igmpState, igmpStateRecv)
mcastTraffic = McastTraffic(groups, iface= 'veth2', cb = self.send_mcast_cb,
arg = igmpState)
self.df = df
self.mcastTraffic = mcastTraffic
self.recv_socket = L3PacketSocket(iface = 'veth0', type = ETH_P_IP)
def igmp_srp_task(stateList):
igmpSendState, igmpRecvState = stateList
if not mcastTraffic.isRecvStopped():
self.igmp_recv(igmpRecvState)
reactor.callLater(0, igmp_srp_task, stateList)
else:
self.mcastTraffic.stop()
self.recv_socket.close()
self.igmp_verify_leave(stateList, leave_groups)
self.df.callback(0)
self.send_igmp_join_recvQuery(groups,3)
# start traffic and schedule the receive task so the deferred can fire (mirrors the sibling query tests)
mcastTraffic.start()
self.test_timer = reactor.callLater(self.MCAST_TRAFFIC_TIMEOUT, self.mcast_traffic_timer)
reactor.callLater(0, igmp_srp_task, igmpStateList)
return df
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+190)
def test_igmp_for_periodic_query_received_and_checking_entry_deleted(self):
groups = ['224.0.1.10', '225.0.0.10']
leave_groups = ['224.0.1.10']
df = defer.Deferred()
igmpState = IGMPTestState(groups = groups, df = df)
igmpStateRecv = IGMPTestState(groups = groups, df = df)
igmpStateList = (igmpState, igmpStateRecv)
mcastTraffic = McastTraffic(groups, iface= 'veth2', cb = self.send_mcast_cb,
arg = igmpState)
self.df = df
self.mcastTraffic = mcastTraffic
self.recv_socket = L3PacketSocket(iface = 'veth0', type = ETH_P_IP)
def igmp_srp_task(stateList):
igmpSendState, igmpRecvState = stateList
if not mcastTraffic.isRecvStopped():
self.igmp_recv(igmpRecvState)
reactor.callLater(0, igmp_srp_task, stateList)
else:
self.mcastTraffic.stop()
self.recv_socket.close()
self.igmp_verify_leave(stateList, leave_groups)
self.df.callback(0)
self.send_igmp_join_recvQuery(groups,3)
log_test.info('Received periodic general query packets for multicast %s, now checking that the entry is deleted from the table by sending traffic for that group' %groups)
mcastTraffic.start()
self.test_timer = reactor.callLater(self.MCAST_TRAFFIC_TIMEOUT, self.mcast_traffic_timer)
reactor.callLater(0, igmp_srp_task, igmpStateList)
return df
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+190)
def test_igmp_member_query_interval_and_expiry_for_rejoining_interface(self):
groups = ['224.0.1.10', '225.0.0.10']
leave_groups = ['224.0.1.10']
df = defer.Deferred()
igmpState = IGMPTestState(groups = groups, df = df)
igmpStateRecv = IGMPTestState(groups = groups, df = df)
igmpStateList = (igmpState, igmpStateRecv)
mcastTraffic = McastTraffic(groups, iface= 'veth2', cb = self.send_mcast_cb,
arg = igmpState)
self.df = df
self.mcastTraffic = mcastTraffic
self.recv_socket = L3PacketSocket(iface = 'veth0', type = ETH_P_IP)
def igmp_srp_task(stateList):
igmpSendState, igmpRecvState = stateList
if not mcastTraffic.isRecvStopped():
self.igmp_recv(igmpRecvState)
reactor.callLater(0, igmp_srp_task, stateList)
else:
self.mcastTraffic.stop()
self.recv_socket.close()
self.igmp_verify_leave(stateList, leave_groups)
self.df.callback(0)
self.send_igmp_join_recvQuery(groups,3)
log_test.info('Received periodic general query packets for multicast %s, now sending a join packet again and verifying whether traffic for that group is received on the joining interface' %groups)
self.send_igmp_join(groups)
mcastTraffic.start()
self.test_timer = reactor.callLater(self.MCAST_TRAFFIC_TIMEOUT, self.mcast_traffic_timer)
reactor.callLater(0, igmp_srp_task, igmpStateList)
return df
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+50)
def test_igmp_leave_received_group_and_source_specific_query(self):
groups = ['224.0.1.10', '225.0.0.10']
leave_groups = ['224.0.1.10']
df = defer.Deferred()
igmpState = IGMPTestState(groups = groups, df = df)
igmpStateRecv = IGMPTestState(groups = groups, df = df)
igmpStateList = (igmpState, igmpStateRecv)
mcastTraffic = McastTraffic(groups, iface= 'veth2', cb = self.send_mcast_cb,
arg = igmpState)
self.df = df
self.mcastTraffic = mcastTraffic
self.recv_socket = L3PacketSocket(iface = 'veth0', type = ETH_P_IP)
def igmp_srp_task(stateList):
igmpSendState, igmpRecvState = stateList
if not mcastTraffic.isRecvStopped():
self.igmp_recv(igmpRecvState)
reactor.callLater(0, igmp_srp_task, stateList)
else:
self.mcastTraffic.stop()
self.recv_socket.close()
self.igmp_verify_leave(stateList, leave_groups)
self.df.callback(0)
self.send_igmp_join(groups)
self.send_igmp_leave_listening_group_specific_query(leave_groups, delay = 3)
# start traffic and schedule the receive task so the deferred can fire (mirrors the sibling query tests)
mcastTraffic.start()
self.test_timer = reactor.callLater(self.MCAST_TRAFFIC_TIMEOUT, self.mcast_traffic_timer)
reactor.callLater(0, igmp_srp_task, igmpStateList)
return df
def igmp_change_to_exclude_src_list_check_for_group_source_specific_query(self, df = None):
groups1 = (self.MGROUP1,)
groups2 = (self.MGROUP2,)
self.igmp_send_joins_different_groups_srclist(groups1 + groups2,
(['2.2.2.2', '3.3.3.3', '4.4.4.4'], ['2.2.2.2', '5.5.5.5']),
intf = self.V_INF1, delay = 2)
dst_mac = '01:00:5e:01:02:03'
src_ip = '2.2.2.2'
if df is None:
df = defer.Deferred()
igmpState1 = IGMPTestState(groups = groups1, df = df)
mcastTraffic1 = McastTraffic(groups1, iface= 'veth2', dst_mac = dst_mac,
src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic1.start()
join_state1 = IGMPTestState(groups = groups1)
self.igmp_recv_task(self.V_INF1, groups1, join_state1)
self.send_igmp_leave_listening_group_specific_query(groups = groups1, src_list = ['2.2.2.2'], iface = self.V_INF1, delay =2)
time.sleep(10)
target2 = self.igmp_not_recv_task(self.V_INF1, groups1, join_state1)
assert target2 == 1, 'EXPECTED FAILURE'
log_test.info('Interface is not receiving traffic from multicast groups %s after sending CHANGE_TO_EXCLUDE' %groups1)
mcastTraffic1.stop()
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+60)
def test_igmp_change_to_exclude_src_list_and_check_for_group_source_specific_query(self):
df = defer.Deferred()
def igmp_change_to_exclude_src_list_check_for_group_source_specific_query():
self.igmp_change_to_exclude_src_list_check_for_group_source_specific_query(df = df)
df.callback(0)
reactor.callLater(0, igmp_change_to_exclude_src_list_check_for_group_source_specific_query)
return df
def igmp_change_to_include_src_list_check_for_general_query(self, df = None):
groups1 = (self.MGROUP1,)
groups2 = (self.MGROUP2,)
self.send_igmp_leave(groups = groups1, src_list = ['2.2.2.2', '3.3.3.3', '4.4.4.4'],
iface = self.V_INF1, delay = 2)
dst_mac = '01:00:5e:01:02:03'
src_ip = '2.2.2.2'
if df is None:
df = defer.Deferred()
igmpState1 = IGMPTestState(groups = groups1, df = df)
mcastTraffic1 = McastTraffic(groups1, iface= 'veth2', dst_mac = dst_mac,
src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic1.start()
join_state1 = IGMPTestState(groups = groups1)
target1= self.igmp_not_recv_task(self.V_INF1, groups1, join_state1)
assert target1 == 1, 'EXPECTED FAILURE'
log_test.info('Interface is not receiving traffic from multicast groups %s' %groups1)
self.igmp_send_joins_different_groups_srclist_wait_query_packets(groups1 + groups2,
(['2.2.2.2', '3.3.3.3', '4.4.4.4'], ['6.6.6.6', '5.5.5.5']),
intf = self.V_INF1, delay = 2,query_group1 = 'group1', query_group2 = None)
time.sleep(10)
self.igmp_recv_task(self.V_INF1, groups1, join_state1)
log_test.info('Interface is receiving traffic from multicast groups %s after sending a CHANGE_TO_INCLUDE message' %groups1)
mcastTraffic1.stop()
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+80)
def test_igmp_change_to_include_src_list_and_check_for_general_query(self):
df = defer.Deferred()
def igmp_change_to_include_src_list_check_for_general_query():
self.igmp_change_to_include_src_list_check_for_general_query(df = df)
df.callback(0)
reactor.callLater(0, igmp_change_to_include_src_list_check_for_general_query)
return df
def igmp_allow_new_src_list_check_for_general_query(self, df = None):
groups1 = (self.MGROUP1,)
groups2 = (self.MGROUP2,)
self.igmp_send_joins_different_groups_srclist(groups1+groups2,
(['2.2.2.2', '3.3.3.3', '4.4.4.4'], ['2.2.2.2', '5.5.5.5']),
intf = self.V_INF1, delay = 2)
dst_mac = '01:00:5e:01:02:03'
src_ip = '6.6.6.6'
if df is None:
df = defer.Deferred()
igmpState1 = IGMPTestState(groups = groups1, df = df)
mcastTraffic1 = McastTraffic(groups1, iface= 'veth2', dst_mac = dst_mac,
src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
mcastTraffic1.start()
join_state1 = IGMPTestState(groups = groups1)
self.igmp_not_recv_task(self.V_INF1, groups1, join_state1)
log_test.info('Interface is not receiving traffic from multicast groups %s' %groups1)
self.igmp_send_joins_different_groups_srclist_wait_query_packets(groups1 + groups2, (['2.2.2.2', '6.6.6.6', '3.3.3.3', '4.4.4.4'], ['2.2.2.2', '5.5.5.5']),
intf = self.V_INF1, delay = 2, query_group1 = 'group1', query_group2 = None)
self.igmp_recv_task(self.V_INF1, groups1, join_state1)
log_test.info('Interface is receiving traffic from multicast groups %s after sending join with new source list' %groups1)
mcastTraffic1.stop()
@deferred(timeout=MCAST_TRAFFIC_TIMEOUT+80)
def test_igmp_allow_new_src_list_and_check_for_general_query(self):
df = defer.Deferred()
def igmp_allow_new_src_list_check_for_general_query():
self.igmp_allow_new_src_list_check_for_general_query(df = df)
df.callback(0)
reactor.callLater(0, igmp_allow_new_src_list_check_for_general_query)
return df
    def igmp_block_old_src_list_check_for_group_source_specific_query(self, df = None):
        groups1 = (self.MGROUP1,)
        groups2 = (self.MGROUP2,)
        groups = groups1 + groups2
        self.igmp_send_joins_different_groups_srclist(groups,
                (['2.2.2.2', '3.3.3.3', '4.4.4.4'], ['2.2.2.2', '5.5.5.5']),
                intf = self.V_INF1, delay = 2)
        dst_mac = '01:00:5e:02:02:03'
        src_ip = '5.5.5.5'
        if df is None:
            df = defer.Deferred()
        igmpState1 = IGMPTestState(groups = groups2, df = df)
        IGMPTestState(groups = groups2, df = df)
        mcastTraffic1 = McastTraffic(groups2, iface = 'veth2', dst_mac = dst_mac,
                src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
        mcastTraffic1.start()
        join_state1 = IGMPTestState(groups = groups2)
        self.igmp_recv_task(self.V_INF1, groups2, join_state1)
        log_test.info('Interface is receiving traffic from multicast groups %s' % groups2)
        self.igmp_send_joins_different_groups_srclist_wait_query_packets(groups,
                (['6.6.6.6', '3.3.3.3', '4.4.4.4'], ['2.2.2.2', '7.7.7.7']),
                intf = self.V_INF1, delay = 2, query_group1 = 'group1', query_group2 = None)
        target2 = self.igmp_not_recv_task(self.V_INF1, groups2, join_state1)
        assert target2 == 1, 'EXPECTED FAILURE'
        log_test.info('Interface is not receiving traffic from multicast groups %s after sending join with block old source list' % groups2)
        mcastTraffic1.stop()

    @deferred(timeout=MCAST_TRAFFIC_TIMEOUT+90)
    def test_igmp_block_old_src_list_and_check_for_group_source_specific_query(self):
        df = defer.Deferred()
        def igmp_block_old_src_list_check_for_group_source_specific_query():
            self.igmp_block_old_src_list_check_for_group_source_specific_query(df = df)
            df.callback(0)
        reactor.callLater(0, igmp_block_old_src_list_check_for_group_source_specific_query)
        return df
    def igmp_include_to_allow_src_list_check_for_general_query(self, df = None):
        groups1 = (self.MGROUP1,)
        groups2 = (self.MGROUP2,)
        self.igmp_send_joins_different_groups_srclist(groups1 + groups2,
                (['2.2.2.2', '3.3.3.3', '4.4.4.4'], ['2.2.2.2', '5.5.5.5']),
                intf = self.V_INF1, delay = 2)
        dst_mac = '01:00:5e:01:02:03'
        src_ip = '2.2.2.2'
        if df is None:
            df = defer.Deferred()
        igmpState1 = IGMPTestState(groups = groups1, df = df)
        IGMPTestState(groups = groups1, df = df)
        mcastTraffic1 = McastTraffic(groups1, iface = 'veth2', dst_mac = dst_mac,
                src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
        mcastTraffic1.start()
        join_state1 = IGMPTestState(groups = groups1)
        self.igmp_recv_task(self.V_INF1, groups1, join_state1)
        self.igmp_send_joins_different_groups_srclist_wait_query_packets(groups1 + groups2,
                (['2.2.2.2', '3.3.3.3', '4.4.4.4', '6.6.6.6'], ['2.2.2.2', '5.5.5.5']),
                intf = self.V_INF1, delay = 2, query_group1 = 'group1', query_group2 = None)
        self.igmp_recv_task(self.V_INF1, groups1, join_state1)
        mcastTraffic1.stop()

    @deferred(timeout=MCAST_TRAFFIC_TIMEOUT+40)
    def test_igmp_include_to_allow_src_list_and_check_for_general_query(self):
        df = defer.Deferred()
        def igmp_include_to_allow_src_list_check_for_general_query():
            self.igmp_include_to_allow_src_list_check_for_general_query(df = df)
            df.callback(0)
        reactor.callLater(0, igmp_include_to_allow_src_list_check_for_general_query)
        return df
    def igmp_include_to_block_src_list_check_for_group_source_specific_query(self, df = None):
        groups1 = (self.MGROUP1,)
        groups2 = (self.MGROUP2,)
        self.igmp_send_joins_different_groups_srclist(groups1 + groups2,
                (['2.2.2.2', '3.3.3.3', '4.4.4.4'], ['2.2.2.2', '5.5.5.5']),
                intf = self.V_INF1, delay = 2)
        dst_mac = '01:00:5e:01:02:03'
        src_ip = '2.2.2.2'
        if df is None:
            df = defer.Deferred()
        igmpState1 = IGMPTestState(groups = groups1, df = df)
        IGMPTestState(groups = groups1, df = df)
        mcastTraffic1 = McastTraffic(groups1, iface = 'veth2', dst_mac = dst_mac,
                src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
        mcastTraffic1.start()
        join_state1 = IGMPTestState(groups = groups1)
        self.igmp_recv_task(self.V_INF1, groups1, join_state1)
        self.send_igmp_leave_listening_group_specific_query(groups = groups1, src_list = ['6.6.6.6', '7.7.7.7'],
                iface = self.V_INF1, delay = 2)
        self.igmp_recv_task(self.V_INF1, groups1, join_state1)
        mcastTraffic1.stop()

    @deferred(timeout=MCAST_TRAFFIC_TIMEOUT+40)
    def test_igmp_include_to_block_src_list_and_check_for_group_source_specific_query(self):
        df = defer.Deferred()
        def igmp_include_to_block_src_list_check_for_group_source_specific_query():
            self.igmp_include_to_block_src_list_check_for_group_source_specific_query(df = df)
            df.callback(0)
        reactor.callLater(0, igmp_include_to_block_src_list_check_for_group_source_specific_query)
        return df
    def igmp_exclude_to_allow_src_list_check_for_general_query(self, df = None):
        groups1 = (self.MGROUP1,)
        groups2 = (self.MGROUP2,)
        self.send_igmp_leave(groups = groups1, src_list = ['2.2.2.2', '3.3.3.3', '4.4.4.4'],
                iface = self.V_INF1, delay = 2)
        dst_mac = '01:00:5e:01:02:03'
        src_ip = '2.2.2.2'
        if df is None:
            df = defer.Deferred()
        igmpState1 = IGMPTestState(groups = groups1, df = df)
        IGMPTestState(groups = groups1, df = df)
        mcastTraffic1 = McastTraffic(groups1, iface = 'veth2', dst_mac = dst_mac,
                src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
        mcastTraffic1.start()
        join_state1 = IGMPTestState(groups = groups1)
        target1 = self.igmp_not_recv_task(self.V_INF1, groups1, join_state1)
        assert target1 == 1, 'EXPECTED FAILURE'
        log_test.info('Interface is not receiving traffic from multicast groups %s' % groups1)
        self.igmp_send_joins_different_groups_srclist_wait_query_packets(groups1 + groups2,
                (['6.6.6.6', '7.7.7.7', '8.8.8.8'], ['6.6.6.6', '5.5.5.5']),
                intf = self.V_INF1, delay = 2, query_group1 = 'group1', query_group2 = None)
        target1 = self.igmp_not_recv_task(self.V_INF1, groups1, join_state1)
        assert target1 == 1, 'EXPECTED FAILURE'
        log_test.info('Interface is not receiving traffic from multicast groups %s' % groups1)
        mcastTraffic1.stop()

    @deferred(timeout=MCAST_TRAFFIC_TIMEOUT+90)
    def test_igmp_exclude_to_allow_src_list_and_check_for_general_query(self):
        df = defer.Deferred()
        def igmp_exclude_to_allow_src_list_check_for_general_query():
            self.igmp_exclude_to_allow_src_list_check_for_general_query(df = df)
            df.callback(0)
        reactor.callLater(0, igmp_exclude_to_allow_src_list_check_for_general_query)
        return df
    def igmp_exclude_to_block_src_list_check_for_group_source_specific_query(self, df = None):
        groups1 = (self.MGROUP1,)
        self.send_igmp_leave(groups = groups1, src_list = ['2.2.2.2', '3.3.3.3', '4.4.4.4'],
                iface = self.V_INF1, delay = 2)
        dst_mac = '01:00:5e:01:02:03'
        src_ip = '2.2.2.2'
        if df is None:
            df = defer.Deferred()
        igmpState1 = IGMPTestState(groups = groups1, df = df)
        IGMPTestState(groups = groups1, df = df)
        mcastTraffic1 = McastTraffic(groups1, iface = 'veth2', dst_mac = dst_mac,
                src_ip = src_ip, cb = self.send_mcast_cb, arg = igmpState1)
        mcastTraffic1.start()
        join_state1 = IGMPTestState(groups = groups1)
        target1 = self.igmp_not_recv_task(self.V_INF1, groups1, join_state1)
        assert target1 == 1, 'EXPECTED FAILURE'
        log_test.info('Interface is not receiving traffic from multicast groups %s' % groups1)
        self.send_igmp_leave_listening_group_specific_query(groups = groups1,
                src_list = ['2.2.2.2', '3.3.3.3', '4.4.4.4', '5.5.5.5', '7.7.7.7'],
                iface = self.V_INF1, delay = 2)
        target1 = self.igmp_not_recv_task(self.V_INF1, groups1, join_state1)
        assert target1 == 1, 'EXPECTED FAILURE'
        log_test.info('Interface is not receiving traffic from multicast groups %s' % groups1)
        mcastTraffic1.stop()

    @deferred(timeout=MCAST_TRAFFIC_TIMEOUT+40)
    def test_igmp_exclude_to_block_src_list_and_check_for_group_source_specific_query(self):
        df = defer.Deferred()
        def igmp_exclude_to_block_src_list_check_for_group_source_specific_query():
            self.igmp_exclude_to_block_src_list_check_for_group_source_specific_query(df = df)
            df.callback(0)
        reactor.callLater(0, igmp_exclude_to_block_src_list_check_for_group_source_specific_query)
        return df
    def iptomac(self, mcast_ip):
        """Map a multicast IP address to its multicast MAC address (RFC 1112):
        01:00:5e followed by the low-order 23 bits of the IP address."""
        mcast_mac = '01:00:5e:'
        octets = mcast_ip.split('.')
        second_oct = int(octets[1]) & 127
        third_oct = int(octets[2])
        fourth_oct = int(octets[3])
        mcast_mac = mcast_mac + format(second_oct, '02x') + ':' + format(third_oct, '02x') + ':' + format(fourth_oct, '02x')
        return mcast_mac

    def send_multicast_data_traffic(self, group, intf = 'veth2', source = '1.2.3.4'):
        dst_mac = self.iptomac(group)
        eth = Ether(dst = dst_mac)
        ip = IP(dst = group, src = source)
        data = repr(monotonic.monotonic())
        sendp(eth/ip/data, count = 20, iface = intf)

    def verify_igmp_data_traffic(self, group, intf = 'veth0', source = '1.2.3.4'):
        log_test.info('verifying multicast traffic for group %s from source %s' % (group, source))
        self.success = False
        def recv_task():
            def igmp_recv_cb(pkt):
                #log_test.info('received multicast data packet is %s' % pkt.show())
                log_test.info('multicast data received for group %s from source %s' % (group, source))
                self.success = True
            # sniff on the receiving interface passed by the caller
            sniff(prn = igmp_recv_cb, lfilter = lambda p: IP in p and p[IP].dst == group and p[IP].src == source,
                  count = 1, timeout = 2, iface = intf)
        t = threading.Thread(target = recv_task)
        t.start()
        self.send_multicast_data_traffic(group, source = source)
        t.join()
        return self.success
    def test_igmp_include_exclude_modes(self):
        groups = ['224.2.3.4','230.5.6.7']
        src_list = ['2.2.2.2','3.3.3.3']
        self.onos_ssm_table_load(groups, src_list = src_list)
        self.send_igmp_join(groups = [groups[0]], src_list = src_list, record_type = IGMP_V3_GR_TYPE_INCLUDE,
                iface = self.V_INF1, delay = 2)
        self.send_igmp_join(groups = [groups[1]], src_list = src_list, record_type = IGMP_V3_GR_TYPE_EXCLUDE,
                iface = self.V_INF1, delay = 2)
        status = self.verify_igmp_data_traffic(groups[0], intf = self.V_INF1, source = src_list[0])
        assert_equal(status, True)
        status = self.verify_igmp_data_traffic(groups[1], intf = self.V_INF1, source = src_list[1])
        assert_equal(status, False)
    def test_igmp_allow_new_source_mode(self):
        group = ['224.8.9.3']
        src_list = ['2.2.2.2','3.3.3.3']
        #dst_mac = self.iptomac(group[0])
        self.onos_ssm_table_load(group, src_list)
        self.send_igmp_join(groups = group, src_list = src_list[0], record_type = IGMP_V3_GR_TYPE_INCLUDE,
                iface = self.V_INF1, delay = 1)
        status = self.verify_igmp_data_traffic(group[0], intf = self.V_INF1, source = src_list[0])
        assert_equal(status, True)  # expecting igmp data traffic from source src_list[0]
        self.send_igmp_join(groups = group, src_list = src_list[1], record_type = IGMP_V3_GR_TYPE_ALLOW_NEW,
                iface = self.V_INF1, delay = 1)
        for src in src_list:
            status = self.verify_igmp_data_traffic(group[0], intf = self.V_INF1, source = src)
            assert_equal(status, True)  # expecting igmp data traffic from both sources
    def test_igmp_include_to_exclude_mode_change(self):
        group = ['224.2.3.4']
        src_list = ['2.2.2.2','3.3.3.3']
        self.onos_ssm_table_load(group, src_list)
        self.send_igmp_join(groups = group, src_list = src_list[0], record_type = IGMP_V3_GR_TYPE_INCLUDE,
                iface = self.V_INF1, delay = 1)
        status = self.verify_igmp_data_traffic(group[0], intf = self.V_INF1, source = src_list[0])
        assert_equal(status, True)  # expecting igmp data traffic from source src_list[0]
        self.send_igmp_join(groups = group, src_list = src_list[1], record_type = IGMP_V3_GR_TYPE_EXCLUDE,
                iface = self.V_INF1, delay = 1)
        for src in src_list:
            status = self.verify_igmp_data_traffic(group[0], intf = self.V_INF1, source = src)
            assert_equal(status, False)  # not expecting igmp data traffic from either source
    def test_igmp_exclude_to_include_mode_change(self):
        group = ['224.2.3.4']
        src = ['2.2.2.2']
        self.onos_ssm_table_load(group, src)
        self.send_igmp_join(groups = group, src_list = src, record_type = IGMP_V3_GR_TYPE_EXCLUDE,
                iface = self.V_INF1, delay = 1)
        status = self.verify_igmp_data_traffic(group[0], intf = self.V_INF1, source = src[0])
        assert_equal(status, False)  # not expecting igmp data traffic from source src[0]
        self.send_igmp_join(groups = group, src_list = src, record_type = IGMP_V3_GR_TYPE_INCLUDE,
                iface = self.V_INF1, delay = 1)
        status = self.verify_igmp_data_traffic(group[0], intf = self.V_INF1, source = src[0])
        assert_equal(status, True)  # expecting igmp data traffic from source src[0] again
    #this test case works properly only if the snooping device (ONOS) has a multicast router connected.
    def test_igmp_to_include_mode_with_null_source(self):
        groups = ['224.2.3.4','230.7.9.8']
        src = ['192.168.12.34']
        dst_mac = []
        dst_mac.append(self.iptomac(groups[0]))
        dst_mac.append(self.iptomac(groups[1]))
        self.onos_ssm_table_load(groups, src)
        self.send_igmp_join(groups = groups, src_list = src, record_type = IGMP_V3_GR_TYPE_INCLUDE,
                iface = self.V_INF1, delay = 1)
        for grp in groups:
            status = self.verify_igmp_data_traffic(grp, intf = self.V_INF1, source = src[0])
            assert_equal(status, True)  # expecting igmp data traffic from source src[0]
        #sending leave packet for group groups[1]
        self.send_igmp_join(groups = [groups[1]], src_list = [], record_type = IGMP_V3_GR_TYPE_CHANGE_TO_INCLUDE,
                iface = self.V_INF1, delay = 1)
        for grp in groups:
            status = self.verify_igmp_data_traffic(grp, intf = self.V_INF1, source = src[0])
            if grp == groups[0]:
                assert_equal(status, True)  # expecting igmp data traffic to group groups[0]
            else:
                assert_equal(status, False)  # not expecting igmp data traffic to group groups[1]
    def test_igmp_to_include_mode(self):
        group = ['229.9.3.6']
        src_list = ['192.168.12.34','192.18.1.34']
        self.onos_ssm_table_load(group, src_list)
        self.send_igmp_join(groups = group, src_list = [src_list[0]], record_type = IGMP_V3_GR_TYPE_INCLUDE,
                iface = self.V_INF1, delay = 1)
        status = self.verify_igmp_data_traffic(group[0], intf = self.V_INF1, source = src_list[0])
        assert_equal(status, True)  # expecting igmp data traffic from source src_list[0]
        self.send_igmp_join(groups = group, src_list = src_list, record_type = IGMP_V3_GR_TYPE_CHANGE_TO_INCLUDE,
                iface = self.V_INF1, delay = 1)
        for src in src_list:
            status = self.verify_igmp_data_traffic(group[0], intf = self.V_INF1, source = src)
            assert_equal(status, True)  # expecting igmp data traffic from both sources
    #this test case passes only if a multicast router is connected to ONOS.
    def test_igmp_blocking_old_source_mode(self):
        group = ['224.2.3.4']
        src_list = ['2.2.2.2','3.3.3.3']
        self.onos_ssm_table_load(group, src_list)
        self.send_igmp_join(groups = group, src_list = src_list, record_type = IGMP_V3_GR_TYPE_INCLUDE,
                iface = self.V_INF1, delay = 1)
        for src in src_list:
            status = self.verify_igmp_data_traffic(group[0], intf = self.V_INF1, source = src)
            assert_equal(status, True)  # expecting igmp data traffic from both sources
        self.send_igmp_join(groups = group, src_list = [src_list[1]], record_type = IGMP_V3_GR_TYPE_BLOCK_OLD,
                iface = self.V_INF1, delay = 1)
        for src in src_list:
            status = self.verify_igmp_data_traffic(group[0], intf = self.V_INF1, source = src)
            if src == src_list[0]:
                assert_equal(status, True)  # expecting igmp data traffic from source src_list[0]
            else:
                assert_equal(status, False)  # not expecting igmp data traffic from source src_list[1]
    def test_igmp_multiple_joins_and_data_verification_with_100_groups(self):
        groups = []
        sources = []
        count = 1
        mcastips = self.mcast_ip_range(start_ip = '226.0.0.1', end_ip = '226.0.5.254')
        sourceips = self.source_ip_range(start_ip = '10.10.0.1', end_ip = '10.10.5.254')
        while count <= 100:
            group = random.choice(mcastips)
            source = random.choice(sourceips)
            if group not in groups:
                log_test.info('group and source are %s and %s' % (group, source))
                groups.append(group)
                sources.append(source)
                count += 1
        self.onos_ssm_table_load(groups, src_list = sources, flag = True)
        for i in range(100):
            self.send_igmp_join(groups = [groups[i]], src_list = [sources[i]], record_type = IGMP_V3_GR_TYPE_INCLUDE,
                    iface = self.V_INF1)
            status = self.verify_igmp_data_traffic(groups[i], intf = self.V_INF1, source = sources[i])
            assert_equal(status, True)
            log_test.info('data received for group %s from source %s' % (groups[i], sources[i]))
    def test_igmp_multiple_joins_with_data_verification_and_leaving_100_groups(self):
        groups = []
        sources = []
        count = 1
        mcastips = self.mcast_ip_range(start_ip = '226.0.0.1', end_ip = '226.0.5.254')
        sourceips = self.source_ip_range(start_ip = '10.10.0.1', end_ip = '10.10.5.254')
        while count <= 100:
            group = random.choice(mcastips)
            source = random.choice(sourceips)
            if group not in groups:
                log_test.info('group and source are %s and %s' % (group, source))
                groups.append(group)
                sources.append(source)
                count += 1
        self.onos_ssm_table_load(groups, src_list = sources, flag = True)
        for i in range(100):
            self.send_igmp_join(groups = [groups[i]], src_list = [sources[i]], record_type = IGMP_V3_GR_TYPE_INCLUDE,
                    iface = self.V_INF1)
            status = self.verify_igmp_data_traffic(groups[i], intf = self.V_INF1, source = sources[i])
            assert_equal(status, True)
            log_test.info('data received for group %s from source %s' % (groups[i], sources[i]))
            self.send_igmp_join(groups = [groups[i]], src_list = [sources[i]], record_type = IGMP_V3_GR_TYPE_CHANGE_TO_EXCLUDE,
                    iface = self.V_INF1, delay = 1)
            status = self.verify_igmp_data_traffic(groups[i], intf = self.V_INF1, source = sources[i])
            assert_equal(status, False)
            log_test.info("data not received for group %s from source %s after changing group mode to 'TO-EXCLUDE' mode" % (groups[i], sources[i]))
    def test_igmp_group_source_for_only_config_with_1000_entries(self):
        groups = []
        sources = []
        count = 1
        mcastips = self.mcast_ip_range(start_ip = '229.0.0.1', end_ip = '229.0.50.254')
        sourceips = self.source_ip_range(start_ip = '10.10.0.1', end_ip = '10.10.50.254')
        while count <= 1000:
            group = random.choice(mcastips)
            source = random.choice(sourceips)
            if group not in groups:
                log_test.info('group and source are %s and %s' % (group, source))
                groups.append(group)
                sources.append(source)
                count += 1
        self.onos_ssm_table_load(groups, src_list = sources, flag = True)
    def test_igmp_from_exclude_to_include_mode_with_100_groups(self):
        groups = []
        sources = []
        count = 1
        mcastips = self.mcast_ip_range(start_ip = '229.0.0.1', end_ip = '229.0.10.254')
        sourceips = self.source_ip_range(start_ip = '10.10.0.1', end_ip = '10.10.10.254')
        while count <= 100:
            group = random.choice(mcastips)
            source = random.choice(sourceips)
            if group not in groups:
                log_test.info('group and source are %s and %s' % (group, source))
                groups.append(group)
                sources.append(source)
                count += 1
        self.onos_ssm_table_load(groups, src_list = sources, flag = True)
        for i in range(100):
            self.send_igmp_join(groups = [groups[i]], src_list = [sources[i]], record_type = IGMP_V3_GR_TYPE_EXCLUDE,
                    iface = self.V_INF1)
            status = self.verify_igmp_data_traffic(groups[i], intf = self.V_INF1, source = sources[i])
            assert_equal(status, False)
            log_test.info('data not received for group %s from source %s as expected' % (groups[i], sources[i]))
            self.send_igmp_join(groups = [groups[i]], src_list = [sources[i]], record_type = IGMP_V3_GR_TYPE_INCLUDE,
                    iface = self.V_INF1)
            status = self.verify_igmp_data_traffic(groups[i], intf = self.V_INF1, source = sources[i])
            assert_equal(status, True)
            log_test.info("data received for group %s from source %s after changing group mode to 'TO-INCLUDE' mode" % (groups[i], sources[i]))
    def test_igmp_with_multiple_joins_and_data_verify_with_1000_groups(self):
        groups = []
        sources = []
        count = 1
        mcastips = self.mcast_ip_range(start_ip = '229.0.0.1', end_ip = '229.0.30.254')
        sourceips = self.source_ip_range(start_ip = '10.10.0.1', end_ip = '10.10.30.254')
        while count <= 1000:
            group = random.choice(mcastips)
            source = random.choice(sourceips)
            if group not in groups:
                log_test.info('group and source are %s and %s' % (group, source))
                groups.append(group)
                sources.append(source)
                count += 1
        self.onos_ssm_table_load(groups, src_list = sources, flag = True)
        for i in range(1000):
            self.send_igmp_join(groups = [groups[i]], src_list = [sources[i]], record_type = IGMP_V3_GR_TYPE_INCLUDE,
                    iface = self.V_INF1)
            status = self.verify_igmp_data_traffic(groups[i], intf = self.V_INF1, source = sources[i])
            assert_equal(status, True)
            log_test.info('data received for group %s from source %s - %d' % (groups[i], sources[i], i))
    def test_igmp_with_multiple_joins_and_data_verify_with_5000_groups(self):
        groups = []
        sources = []
        count = 1
        mcastips = self.mcast_ip_range(start_ip = '231.39.19.121', end_ip = '231.40.30.25')
        sourceips = self.source_ip_range(start_ip = '192.168.56.43', end_ip = '192.169.110.30')
        while count <= 5000:
            group = random.choice(mcastips)
            source = random.choice(sourceips)
            if group not in groups:
                log_test.info('group and source are %s and %s' % (group, source))
                groups.append(group)
                sources.append(source)
                count += 1
        self.onos_ssm_table_load(groups, src_list = sources, flag = True)
        for i in range(5000):
            self.send_igmp_join(groups = [groups[i]], src_list = [sources[i]], record_type = IGMP_V3_GR_TYPE_INCLUDE,
                    iface = self.V_INF1)
            status = self.verify_igmp_data_traffic(groups[i], intf = self.V_INF1, source = sources[i])
            assert_equal(status, True)
            log_test.info('data received for group %s from source %s - %d' % (groups[i], sources[i], i))
    """def test_igmp_join_from_multiple_infts(self):
        groups = ['229.9.3.6','234.20.56.2']
        src_list = ['192.168.12.34','192.18.1.34']
        self.onos_ssm_table_load(groups, src_list = src_list)
        self.send_igmp_join(groups = [groups[0]], src_list = src_list, record_type = IGMP_V3_GR_TYPE_INCLUDE,
                iface = 'veth0')
        self.send_igmp_join(groups = [groups[1]], src_list = src_list, record_type = IGMP_V3_GR_TYPE_INCLUDE,
                iface = 'veth2')
        status = self.verify_igmp_data_traffic(groups[0], intf = 'veth0', source = src_list[0])
        assert_equal(status, True)
        status = self.verify_igmp_data_traffic(groups[1], intf = 'veth2', source = src_list[1])
        assert_equal(status, True)  # expecting igmp data traffic on both interfaces
    """
    def test_igmp_send_data_to_non_registered_group(self):
        group = ['224.2.3.4']
        src = ['2.2.2.2']
        self.onos_ssm_table_load(group, src_list = src)
        self.send_igmp_join(groups = ['239.0.0.1'], src_list = src, record_type = IGMP_V3_GR_TYPE_INCLUDE,
                iface = self.V_INF1, delay = 1)
        status = self.verify_igmp_data_traffic('239.0.0.1', intf = self.V_INF1, source = src[0])
        assert_equal(status, False)  # not expecting data traffic for the unregistered group

    def test_igmp_traffic_verification_for_registered_group_with_no_join_sent(self):
        group = ['227.12.3.40']
        src = ['190.4.19.67']
        self.onos_ssm_table_load(group, src_list = src)
        status = self.verify_igmp_data_traffic(group[0], intf = self.V_INF1, source = src[0])
        assert_equal(status, False)  # not expecting data traffic since no join was sent
    def test_igmp_toggling_app_activation(self):
        group = [self.random_mcast_ip()]
        src = [self.randomsourceip()]
        self.onos_ssm_table_load(group, src_list = src)
        self.send_igmp_join(groups = group, src_list = src, record_type = IGMP_V3_GR_TYPE_INCLUDE,
                iface = self.V_INF1)
        status = self.verify_igmp_data_traffic(group[0], intf = self.V_INF1, source = src[0])
        assert_equal(status, True)  # expecting igmp data traffic from source src[0]
        log_test.info('Multicast traffic received for group %s from source %s before the app is deactivated' % (group[0], src[0]))
        self.onos_ctrl.deactivate()
        status = self.verify_igmp_data_traffic(group[0], intf = self.V_INF1, source = src[0])
        assert_equal(status, False)  # not expecting igmp data traffic from source src[0]
        log_test.info('Multicast traffic not received for group %s from source %s after the app is deactivated' % (group[0], src[0]))
        self.onos_ctrl.activate()
        status = self.verify_igmp_data_traffic(group[0], intf = self.V_INF1, source = src[0])
        assert_equal(status, True)  # expecting igmp data traffic from source src[0]
        log_test.info('Multicast traffic received for group %s from source %s after the app is re-activated' % (group[0], src[0]))
    def test_igmp_with_mismatch_for_dst_ip_and_mac_in_data_packets(self):
        group = ['228.18.19.29']
        source = [self.randomsourceip()]
        self.onos_ssm_table_load(group, src_list = source)
        self.send_igmp_join(groups = group, src_list = source, record_type = IGMP_V3_GR_TYPE_INCLUDE,
                iface = self.V_INF1)
        dst_mac = '01:00:5e:0A:12:09'   # deliberately does not match the MAC derived from the group IP
        eth = Ether(dst = dst_mac)
        ip = IP(dst = group[0], src = source[0])
        data = repr(monotonic.monotonic())
        pkt = (eth/ip/data)
        log_test.info('Multicast traffic packet %s' % pkt.show())
        self.success = False
        def recv_task():
            def igmp_recv_cb(pkt):
                #log_test.info('received multicast data packet is %s' % pkt.show())
                log_test.info('multicast data received for group %s from source %s' % (group[0], source[0]))
                self.success = True
            sniff(prn = igmp_recv_cb, lfilter = lambda p: IP in p and p[IP].dst == group[0] and p[IP].src == source[0],
                  count = 1, timeout = 2, iface = 'veth0')
        t = threading.Thread(target = recv_task)
        t.start()
        sendp(eth/ip/data, count = 20, iface = 'veth2')
        t.join()
        assert_equal(self.success, False)  # traffic with a mismatched destination MAC should not be forwarded
    #this test case fails: ONOS registers a unicast IP as an igmp join as well
    def test_igmp_registering_invalid_group(self):
        groups = ['218.18.19.29']
        source = [self.randomsourceip()]
        ssm_dict = {'apps' : { 'org.opencord.igmp' : { 'ssmTranslate' : [] } } }
        ssm_xlate_list = ssm_dict['apps']['org.opencord.igmp']['ssmTranslate']
        for g in groups:
            for s in source:
                d = {}
                d['source'] = s or '0.0.0.0'
                d['group'] = g
                ssm_xlate_list.append(d)
        log_test.info('onos load config is %s' % ssm_dict)
        status, code = OnosCtrl.config(ssm_dict)
        self.send_igmp_join(groups, src_list = source, record_type = IGMP_V3_GR_TYPE_INCLUDE,
                iface = self.V_INF1, delay = 1)
        status = self.verify_igmp_data_traffic(groups[0], intf = self.V_INF1, source = source[0])
        assert_equal(status, False)  # not expecting data traffic for the invalid (non-multicast) group
    def test_igmp_registering_invalid_source(self):
        groups = [self.random_mcast_ip()]
        sources = ['224.10.28.34','193.73.219.257']   # a multicast address and a malformed address, both invalid sources
        ssm_dict = {'apps' : { 'org.opencord.igmp' : { 'ssmTranslate' : [] } } }
        ssm_xlate_list = ssm_dict['apps']['org.opencord.igmp']['ssmTranslate']
        for g in groups:
            for s in sources:
                d = {}
                d['source'] = s or '0.0.0.0'
                d['group'] = g
                ssm_xlate_list.append(d)
        log_test.info('onos load config is %s' % ssm_dict)
        status, code = OnosCtrl.config(ssm_dict)
        assert_equal(status, False)
# ---------------------------------------------------------------------------
# File: sfa/control/__init__.py (repo: milanoscookie/sfa)
# ---------------------------------------------------------------------------
from .influence import compute_influence
from .influence import arrange_si
from .influence import prioritize
# ---------------------------------------------------------------------------
# File: assets/Update.py (repo: EnderNightLord-ChromeBook/Juice-File-Manager)
# ---------------------------------------------------------------------------
import os
import shutil
ans = input('''
=======================================
Juice File Manager 4.0 Update
=======================================
This will update all your packages!!!
(Y/N)#: ''')
if ans == 'y':
ans = input('''
=======================================
Juice File Manager 4.0 Update
=======================================
what kind of install do you have?
1. ~/Downloads/JFM/
2. /JFM/
3. /root/JFM/
4. ~/JFM/
#: ''')
if ans == '1':
os.system("rm -r ~/Downloads/JFM/Assets/Add.py ~/Downloads/JFM/Assets/Check.py ~/Downloads/JFM/Assets/Copy.py ~/Downloads/JFM/Assets/Delete.py ~/Downloads/JFM/Assets/DirList.py ~/Downloads/JFM/Assets/Extra.py ~/Downloads/JFM/Assets/Information.py ~/Downloads/JFM/Assets/MakeDir.py ~/Downloads/JFM/Assets/Move.py ~/Downloads/JFM/Assets/OpenFile.py ~/Downloads/JFM/Assets/OpenWeb.py ~/Downloads/JFM/Assets/Read.py ~/Downloads/JFM/Assets/RemoveDir.py ~/Downloads/JFM/Assets/Write.py ~/Downloads/JFM/Assets/Update.py")
print('[<-> ]')
os.system("wget https://raw.githubusercontent.com/EnderNightLord-ChromeBook/JuiceFileManagerMinimal/master/assets/Add.py -q -P ~/Downloads/JFM/Assets")
print('[ <-> ]')
os.system("wget https://raw.githubusercontent.com/EnderNightLord-ChromeBook/JuiceFileManagerMinimal/master/assets/Check.py -q -P ~/Downloads/JFM/Assets")
print('[ <-> ]')
os.system("wget https://raw.githubusercontent.com/EnderNightLord-ChromeBook/JuiceFileManagerMinimal/master/assets/Copy.py -q -P ~/Downloads/JFM/Assets")
print('[ <-> ]')
os.system("wget https://raw.githubusercontent.com/EnderNightLord-ChromeBook/JuiceFileManagerMinimal/master/assets/Delete.py -q -P ~/Downloads/JFM/Assets")
print('[ <-> ]')
for name in ["DirList.py", "Extra.py", "Information.py", "MakeDir.py", "Move.py",
             "OpenFile.py", "OpenWeb.py", "Read.py", "RemoveDir.py", "Update.py",
             "Write.py", "ShowReadme.py"]:
    os.system("wget https://raw.githubusercontent.com/EnderNightLord-ChromeBook/JuiceFileManagerMinimal/master/assets/" + name + " -q -P ~/Downloads/JFM/Assets")
    print('[ <-> ]')
print('DONE')
ASSET_FILES = ["Add.py", "Check.py", "Copy.py", "Delete.py", "DirList.py",
               "Extra.py", "Information.py", "MakeDir.py", "Move.py",
               "OpenFile.py", "OpenWeb.py", "Read.py", "RemoveDir.py",
               "Write.py", "Update.py"]
ASSET_BASE_URL = "https://raw.githubusercontent.com/EnderNightLord-ChromeBook/JuiceFileManagerMinimal/master/assets"

def refresh_assets(target_dir):
    # Remove the old copies, then re-download every asset (plus ShowReadme.py),
    # printing a progress marker after each download as before.
    os.system("rm -r " + " ".join(target_dir + "/" + name for name in ASSET_FILES))
    print('[ <-> ]')
    for name in ASSET_FILES + ["ShowReadme.py"]:
        os.system("wget " + ASSET_BASE_URL + "/" + name + " -q -P " + target_dir)
        print('[ <-> ]')
    print('DONE')

if ans == '2':
    refresh_assets("/JFM/Assets")
if ans == '3':
    refresh_assets("/root/JFM/Assets")
if ans == '4':
    refresh_assets("~/JFM/Assets")
# --- server/generator/item_consts.py (zzazzdzz/fools2019, MIT license) ---

NO_ITEM = 0x00
MASTER_BALL = 0x01
ULTRA_BALL = 0x02
BRIGHTPOWDER = 0x03
GREAT_BALL = 0x04
POKE_BALL = 0x05
TOWN_MAP = 0x06
BICYCLE = 0x07
MOON_STONE = 0x08
ANTIDOTE = 0x09
BURN_HEAL = 0x0a
ICE_HEAL = 0x0b
AWAKENING = 0x0c
PARLYZ_HEAL = 0x0d
FULL_RESTORE = 0x0e
MAX_POTION = 0x0f
HYPER_POTION = 0x10
SUPER_POTION = 0x11
POTION = 0x12
ESCAPE_ROPE = 0x13
REPEL = 0x14
MAX_ELIXER = 0x15
FIRE_STONE = 0x16
THUNDERSTONE = 0x17
WATER_STONE = 0x18
ITEM_19 = 0x19
HP_UP = 0x1a
PROTEIN = 0x1b
IRON = 0x1c
CARBOS = 0x1d
LUCKY_PUNCH = 0x1e
CALCIUM = 0x1f
RARE_CANDY = 0x20
X_ACCURACY = 0x21
LEAF_STONE = 0x22
METAL_POWDER = 0x23
NUGGET = 0x24
POKE_DOLL = 0x25
FULL_HEAL = 0x26
REVIVE = 0x27
MAX_REVIVE = 0x28
GUARD_SPEC = 0x29
SUPER_REPEL = 0x2a
MAX_REPEL = 0x2b
DIRE_HIT = 0x2c
ITEM_2D = 0x2d
FRESH_WATER = 0x2e
SODA_POP = 0x2f
LEMONADE = 0x30
X_ATTACK = 0x31
ITEM_32 = 0x32
X_DEFEND = 0x33
X_SPEED = 0x34
X_SPECIAL = 0x35
COIN_CASE = 0x36
ITEMFINDER = 0x37
POKE_FLUTE = 0x38
EXP_SHARE = 0x39
OLD_ROD = 0x3a
GOOD_ROD = 0x3b
SILVER_LEAF = 0x3c
SUPER_ROD = 0x3d
PP_UP = 0x3e
ETHER = 0x3f
MAX_ETHER = 0x40
ELIXER = 0x41
RED_SCALE = 0x42
SECRETPOTION = 0x43
S_S_TICKET = 0x44
MYSTERY_EGG = 0x45
CLEAR_BELL = 0x46
SILVER_WING = 0x47
MOOMOO_MILK = 0x48
QUICK_CLAW = 0x49
PSNCUREBERRY = 0x4a
GOLD_LEAF = 0x4b
SOFT_SAND = 0x4c
SHARP_BEAK = 0x4d
PRZCUREBERRY = 0x4e
BURNT_BERRY = 0x4f
ICE_BERRY = 0x50
POISON_BARB = 0x51
KINGS_ROCK = 0x52
BITTER_BERRY = 0x53
MINT_BERRY = 0x54
RED_APRICORN = 0x55
TINYMUSHROOM = 0x56
BIG_MUSHROOM = 0x57
SILVERPOWDER = 0x58
BLU_APRICORN = 0x59
ITEM_5A = 0x5a
AMULET_COIN = 0x5b
YLW_APRICORN = 0x5c
GRN_APRICORN = 0x5d
CLEANSE_TAG = 0x5e
MYSTIC_WATER = 0x5f
TWISTEDSPOON = 0x60
WHT_APRICORN = 0x61
BLACKBELT = 0x62
BLK_APRICORN = 0x63
ITEM_64 = 0x64
PNK_APRICORN = 0x65
BLACKGLASSES = 0x66
SLOWPOKETAIL = 0x67
PINK_BOW = 0x68
STICK = 0x69
SMOKE_BALL = 0x6a
NEVERMELTICE = 0x6b
MAGNET = 0x6c
MIRACLEBERRY = 0x6d
PEARL = 0x6e
BIG_PEARL = 0x6f
EVERSTONE = 0x70
SPELL_TAG = 0x71
RAGECANDYBAR = 0x72
GS_BALL = 0x73
BLUE_CARD = 0x74
MIRACLE_SEED = 0x75
THICK_CLUB = 0x76
FOCUS_BAND = 0x77
ITEM_78 = 0x78
ENERGYPOWDER = 0x79
ENERGY_ROOT = 0x7a
HEAL_POWDER = 0x7b
REVIVAL_HERB = 0x7c
HARD_STONE = 0x7d
LUCKY_EGG = 0x7e
CARD_KEY = 0x7f
MACHINE_PART = 0x80
EGG_TICKET = 0x81
LOST_ITEM = 0x82
STARDUST = 0x83
STAR_PIECE = 0x84
BASEMENT_KEY = 0x85
PASS = 0x86
ITEM_87 = 0x87
ITEM_88 = 0x88
ITEM_89 = 0x89
CHARCOAL = 0x8a
BERRY_JUICE = 0x8b
SCOPE_LENS = 0x8c
ITEM_8D = 0x8d
ITEM_8E = 0x8e
METAL_COAT = 0x8f
DRAGON_FANG = 0x90
ITEM_91 = 0x91
LEFTOVERS = 0x92
ITEM_93 = 0x93
ITEM_94 = 0x94
ITEM_95 = 0x95
MYSTERYBERRY = 0x96
DRAGON_SCALE = 0x97
BERSERK_GENE = 0x98
ITEM_99 = 0x99
ITEM_9A = 0x9a
ITEM_9B = 0x9b
SACRED_ASH = 0x9c
HEAVY_BALL = 0x9d
FLOWER_MAIL = 0x9e
LEVEL_BALL = 0x9f
LURE_BALL = 0xa0
FAST_BALL = 0xa1
ITEM_A2 = 0xa2
LIGHT_BALL = 0xa3
FRIEND_BALL = 0xa4
MOON_BALL = 0xa5
LOVE_BALL = 0xa6
NORMAL_BOX = 0xa7
GORGEOUS_BOX = 0xa8
SUN_STONE = 0xa9
POLKADOT_BOW = 0xaa
ITEM_AB = 0xab
UP_GRADE = 0xac
BERRY = 0xad
GOLD_BERRY = 0xae
SQUIRTBOTTLE = 0xaf
ITEM_B0 = 0xb0
PARK_BALL = 0xb1
RAINBOW_WING = 0xb2
ITEM_B3 = 0xb3
BRICK_PIECE = 0xb4
SURF_MAIL = 0xb5
LITEBLUEMAIL = 0xb6
PORTRAITMAIL = 0xb7
LOVELY_MAIL = 0xb8
EON_MAIL = 0xb9
MORPH_MAIL = 0xba
BLUESKY_MAIL = 0xbb
MUSIC_MAIL = 0xbc
MIRAGE_MAIL = 0xbd
ITEM_BE = 0xbe
# Name -> ID lookup, derived from the constants defined above rather than
# hand-maintaining a duplicate of the whole list.
ITEM_CONSTS = {name: value for name, value in list(globals().items())
               if name.isupper() and isinstance(value, int)}
# Reverse lookup: item ID -> item name.
ITEM_CONSTS_REV = {value: name for name, value in ITEM_CONSTS.items()}
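# The reverse lookup above only round-trips cleanly because item IDs are
# unique. A minimal, self-contained sketch of the same pattern (the `_demo_*`
# names and the abridged dict are illustrative, not part of this module):

```python
# Abridged name<->ID tables demonstrating the reverse-mapping round-trip.
_demo_consts = {"NO_ITEM": 0x00, "MASTER_BALL": 0x01, "ULTRA_BALL": 0x02}
_demo_rev = {value: name for name, value in _demo_consts.items()}

assert _demo_rev[0x01] == "MASTER_BALL"        # ID -> name
assert _demo_consts[_demo_rev[0x02]] == 0x02   # name <-> ID round-trip
assert len(_demo_rev) == len(_demo_consts)     # unique IDs: nothing collapsed
```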
# --- Clarinet/evaluation/__init__.py (rohans0509/Clarinet, MIT license) ---

from Clarinet.evaluation.analyse import analyse
from Clarinet.evaluation.evaluate import evaluate
from Clarinet.evaluation.compile import compile
from Clarinet.evaluation.trends import trends
# --- test/test_utils.py (Dorozhko-Anton/best_practice_package, MIT license) ---

from adossproject.utils import usefull_function
def test_usefull_function():
    assert usefull_function(True, 2) is False
# --- bot/conf/__init__.py (kaulketh/greenhouse, Unlicense) ---

from .lib_german import *
from .lib_english import *
from .lib_ext_greenhouse import *
from .greenhouse_config import *
from .lib_global import *

# --- sdk/python/pulumi_azure/compute/windows_virtual_machine_scale_set.py ---
# --- (ScriptBox99/pulumi-azure, ECL-2.0 / Apache-2.0 licenses) ---

# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['WindowsVirtualMachineScaleSetArgs', 'WindowsVirtualMachineScaleSet']
@pulumi.input_type
class WindowsVirtualMachineScaleSetArgs:
def __init__(__self__, *,
admin_password: pulumi.Input[str],
admin_username: pulumi.Input[str],
instances: pulumi.Input[int],
network_interfaces: pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetNetworkInterfaceArgs']]],
os_disk: pulumi.Input['WindowsVirtualMachineScaleSetOsDiskArgs'],
resource_group_name: pulumi.Input[str],
sku: pulumi.Input[str],
additional_capabilities: Optional[pulumi.Input['WindowsVirtualMachineScaleSetAdditionalCapabilitiesArgs']] = None,
additional_unattend_contents: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetAdditionalUnattendContentArgs']]]] = None,
automatic_instance_repair: Optional[pulumi.Input['WindowsVirtualMachineScaleSetAutomaticInstanceRepairArgs']] = None,
automatic_os_upgrade_policy: Optional[pulumi.Input['WindowsVirtualMachineScaleSetAutomaticOsUpgradePolicyArgs']] = None,
boot_diagnostics: Optional[pulumi.Input['WindowsVirtualMachineScaleSetBootDiagnosticsArgs']] = None,
computer_name_prefix: Optional[pulumi.Input[str]] = None,
custom_data: Optional[pulumi.Input[str]] = None,
data_disks: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetDataDiskArgs']]]] = None,
do_not_run_extensions_on_overprovisioned_machines: Optional[pulumi.Input[bool]] = None,
enable_automatic_updates: Optional[pulumi.Input[bool]] = None,
encryption_at_host_enabled: Optional[pulumi.Input[bool]] = None,
eviction_policy: Optional[pulumi.Input[str]] = None,
extensions: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetExtensionArgs']]]] = None,
extensions_time_budget: Optional[pulumi.Input[str]] = None,
health_probe_id: Optional[pulumi.Input[str]] = None,
identity: Optional[pulumi.Input['WindowsVirtualMachineScaleSetIdentityArgs']] = None,
license_type: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
max_bid_price: Optional[pulumi.Input[float]] = None,
name: Optional[pulumi.Input[str]] = None,
overprovision: Optional[pulumi.Input[bool]] = None,
plan: Optional[pulumi.Input['WindowsVirtualMachineScaleSetPlanArgs']] = None,
platform_fault_domain_count: Optional[pulumi.Input[int]] = None,
priority: Optional[pulumi.Input[str]] = None,
provision_vm_agent: Optional[pulumi.Input[bool]] = None,
proximity_placement_group_id: Optional[pulumi.Input[str]] = None,
rolling_upgrade_policy: Optional[pulumi.Input['WindowsVirtualMachineScaleSetRollingUpgradePolicyArgs']] = None,
scale_in_policy: Optional[pulumi.Input[str]] = None,
secrets: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetSecretArgs']]]] = None,
secure_boot_enabled: Optional[pulumi.Input[bool]] = None,
single_placement_group: Optional[pulumi.Input[bool]] = None,
source_image_id: Optional[pulumi.Input[str]] = None,
source_image_reference: Optional[pulumi.Input['WindowsVirtualMachineScaleSetSourceImageReferenceArgs']] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
terminate_notification: Optional[pulumi.Input['WindowsVirtualMachineScaleSetTerminateNotificationArgs']] = None,
timezone: Optional[pulumi.Input[str]] = None,
upgrade_mode: Optional[pulumi.Input[str]] = None,
user_data: Optional[pulumi.Input[str]] = None,
vtpm_enabled: Optional[pulumi.Input[bool]] = None,
winrm_listeners: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetWinrmListenerArgs']]]] = None,
zone_balance: Optional[pulumi.Input[bool]] = None,
zones: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
"""
The set of arguments for constructing a WindowsVirtualMachineScaleSet resource.
:param pulumi.Input[str] admin_password: The Password which should be used for the local-administrator on this Virtual Machine. Changing this forces a new resource to be created.
:param pulumi.Input[str] admin_username: The username of the local administrator on each Virtual Machine Scale Set instance. Changing this forces a new resource to be created.
:param pulumi.Input[int] instances: The number of Virtual Machines in the Scale Set.
:param pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetNetworkInterfaceArgs']]] network_interfaces: One or more `network_interface` blocks as defined below.
:param pulumi.Input['WindowsVirtualMachineScaleSetOsDiskArgs'] os_disk: An `os_disk` block as defined below.
        :param pulumi.Input[str] resource_group_name: The name of the Resource Group in which the Windows Virtual Machine Scale Set should exist. Changing this forces a new resource to be created.
:param pulumi.Input[str] sku: The Virtual Machine SKU for the Scale Set, such as `Standard_F2`.
        :param pulumi.Input['WindowsVirtualMachineScaleSetAdditionalCapabilitiesArgs'] additional_capabilities: An `additional_capabilities` block as defined below.
:param pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetAdditionalUnattendContentArgs']]] additional_unattend_contents: One or more `additional_unattend_content` blocks as defined below.
        :param pulumi.Input['WindowsVirtualMachineScaleSetAutomaticInstanceRepairArgs'] automatic_instance_repair: An `automatic_instance_repair` block as defined below. To enable the automatic instance repair, this Virtual Machine Scale Set must have a valid `health_probe_id` or an [Application Health Extension](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension).
        :param pulumi.Input['WindowsVirtualMachineScaleSetAutomaticOsUpgradePolicyArgs'] automatic_os_upgrade_policy: An `automatic_os_upgrade_policy` block as defined below. This can only be specified when `upgrade_mode` is set to `Automatic`.
:param pulumi.Input['WindowsVirtualMachineScaleSetBootDiagnosticsArgs'] boot_diagnostics: A `boot_diagnostics` block as defined below.
:param pulumi.Input[str] computer_name_prefix: The prefix which should be used for the name of the Virtual Machines in this Scale Set. If unspecified this defaults to the value for the `name` field. If the value of the `name` field is not a valid `computer_name_prefix`, then you must specify `computer_name_prefix`.
:param pulumi.Input[str] custom_data: The Base64-Encoded Custom Data which should be used for this Virtual Machine Scale Set.
:param pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetDataDiskArgs']]] data_disks: One or more `data_disk` blocks as defined below.
:param pulumi.Input[bool] do_not_run_extensions_on_overprovisioned_machines: Should Virtual Machine Extensions be run on Overprovisioned Virtual Machines in the Scale Set? Defaults to `false`.
:param pulumi.Input[bool] enable_automatic_updates: Are automatic updates enabled for this Virtual Machine? Defaults to `true`.
:param pulumi.Input[bool] encryption_at_host_enabled: Should all of the disks (including the temp disk) attached to this Virtual Machine be encrypted by enabling Encryption at Host?
:param pulumi.Input[str] eviction_policy: The Policy which should be used Virtual Machines are Evicted from the Scale Set. Changing this forces a new resource to be created.
        :param pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetExtensionArgs']]] extensions: One or more `extension` blocks as defined below.
:param pulumi.Input[str] extensions_time_budget: Specifies the duration allocated for all extensions to start. The time duration should be between `15` minutes and `120` minutes (inclusive) and should be specified in ISO 8601 format. Defaults to `90` minutes (`PT1H30M`).
:param pulumi.Input[str] health_probe_id: The ID of a Load Balancer Probe which should be used to determine the health of an instance. This is Required and can only be specified when `upgrade_mode` is set to `Automatic` or `Rolling`.
:param pulumi.Input['WindowsVirtualMachineScaleSetIdentityArgs'] identity: An `identity` block as defined below.
:param pulumi.Input[str] license_type: Specifies the type of on-premise license (also known as [Azure Hybrid Use Benefit](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-hybrid-use-benefit-licensing)) which should be used for this Virtual Machine Scale Set. Possible values are `None`, `Windows_Client` and `Windows_Server`.
:param pulumi.Input[str] location: The Azure location where the Windows Virtual Machine Scale Set should exist. Changing this forces a new resource to be created.
:param pulumi.Input[float] max_bid_price: The maximum price you're willing to pay for each Virtual Machine in this Scale Set, in US Dollars; which must be greater than the current spot price. If this bid price falls below the current spot price the Virtual Machines in the Scale Set will be evicted using the `eviction_policy`. Defaults to `-1`, which means that each Virtual Machine in the Scale Set should not be evicted for price reasons.
:param pulumi.Input[str] name: The name of the Windows Virtual Machine Scale Set. Changing this forces a new resource to be created.
:param pulumi.Input[bool] overprovision: Should Azure over-provision Virtual Machines in this Scale Set? This means that multiple Virtual Machines will be provisioned and Azure will keep the instances which become available first - which improves provisioning success rates and improves deployment time. You're not billed for these over-provisioned VM's and they don't count towards the Subscription Quota. Defaults to `true`.
:param pulumi.Input['WindowsVirtualMachineScaleSetPlanArgs'] plan: A `plan` block as documented below.
:param pulumi.Input[int] platform_fault_domain_count: Specifies the number of fault domains that are used by this Linux Virtual Machine Scale Set. Changing this forces a new resource to be created.
:param pulumi.Input[str] priority: The Priority of this Virtual Machine Scale Set. Possible values are `Regular` and `Spot`. Defaults to `Regular`. Changing this value forces a new resource.
:param pulumi.Input[bool] provision_vm_agent: Should the Azure VM Agent be provisioned on each Virtual Machine in the Scale Set? Defaults to `true`. Changing this value forces a new resource to be created.
:param pulumi.Input[str] proximity_placement_group_id: The ID of the Proximity Placement Group in which the Virtual Machine Scale Set should be assigned to. Changing this forces a new resource to be created.
:param pulumi.Input['WindowsVirtualMachineScaleSetRollingUpgradePolicyArgs'] rolling_upgrade_policy: A `rolling_upgrade_policy` block as defined below. This is Required and can only be specified when `upgrade_mode` is set to `Automatic` or `Rolling`.
:param pulumi.Input[str] scale_in_policy: The scale-in policy rule that decides which virtual machines are chosen for removal when a Virtual Machine Scale Set is scaled in. Possible values for the scale-in policy rules are `Default`, `NewestVM` and `OldestVM`, defaults to `Default`. For more information about the scale-in policy, please [refer to this doc](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy).
:param pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetSecretArgs']]] secrets: One or more `secret` blocks as defined below.
:param pulumi.Input[bool] secure_boot_enabled: Specifies if Secure Boot and Trusted Launch are enabled for the Virtual Machine. Changing this forces a new resource to be created.
:param pulumi.Input[bool] single_placement_group: Should this Virtual Machine Scale Set be limited to a Single Placement Group, which means the number of instances will be capped at 100 Virtual Machines? Defaults to `true`.
:param pulumi.Input[str] source_image_id: The ID of an Image which each Virtual Machine in this Scale Set should be based on.
:param pulumi.Input['WindowsVirtualMachineScaleSetSourceImageReferenceArgs'] source_image_reference: A `source_image_reference` block as defined below.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags which should be assigned to this Virtual Machine Scale Set.
:param pulumi.Input['WindowsVirtualMachineScaleSetTerminateNotificationArgs'] terminate_notification: A `terminate_notification` block as defined below.
:param pulumi.Input[str] timezone: Specifies the time zone of the virtual machine, [the possible values are defined here](https://jackstromberg.com/2017/01/list-of-time-zones-consumed-by-azure/).
:param pulumi.Input[str] upgrade_mode: Specifies how Upgrades (e.g. changing the Image/SKU) should be performed to Virtual Machine Instances. Possible values are `Automatic`, `Manual` and `Rolling`. Defaults to `Manual`.
:param pulumi.Input[str] user_data: The Base64-Encoded User Data which should be used for this Virtual Machine Scale Set.
:param pulumi.Input[bool] vtpm_enabled: Specifies if vTPM (Virtual Trusted Platform Module) and Trusted Launch are enabled for the Virtual Machine. Changing this forces a new resource to be created.
:param pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetWinrmListenerArgs']]] winrm_listeners: One or more `winrm_listener` blocks as defined below.
:param pulumi.Input[bool] zone_balance: Should the Virtual Machines in this Scale Set be strictly evenly distributed across Availability Zones? Defaults to `false`. Changing this forces a new resource to be created.
:param pulumi.Input[Sequence[pulumi.Input[str]]] zones: A list of Availability Zones in which the Virtual Machines in this Scale Set should be created. Changing this forces a new resource to be created.
"""
pulumi.set(__self__, "admin_password", admin_password)
pulumi.set(__self__, "admin_username", admin_username)
pulumi.set(__self__, "instances", instances)
pulumi.set(__self__, "network_interfaces", network_interfaces)
pulumi.set(__self__, "os_disk", os_disk)
pulumi.set(__self__, "resource_group_name", resource_group_name)
pulumi.set(__self__, "sku", sku)
if additional_capabilities is not None:
pulumi.set(__self__, "additional_capabilities", additional_capabilities)
if additional_unattend_contents is not None:
pulumi.set(__self__, "additional_unattend_contents", additional_unattend_contents)
if automatic_instance_repair is not None:
pulumi.set(__self__, "automatic_instance_repair", automatic_instance_repair)
if automatic_os_upgrade_policy is not None:
pulumi.set(__self__, "automatic_os_upgrade_policy", automatic_os_upgrade_policy)
if boot_diagnostics is not None:
pulumi.set(__self__, "boot_diagnostics", boot_diagnostics)
if computer_name_prefix is not None:
pulumi.set(__self__, "computer_name_prefix", computer_name_prefix)
if custom_data is not None:
pulumi.set(__self__, "custom_data", custom_data)
if data_disks is not None:
pulumi.set(__self__, "data_disks", data_disks)
if do_not_run_extensions_on_overprovisioned_machines is not None:
pulumi.set(__self__, "do_not_run_extensions_on_overprovisioned_machines", do_not_run_extensions_on_overprovisioned_machines)
if enable_automatic_updates is not None:
pulumi.set(__self__, "enable_automatic_updates", enable_automatic_updates)
if encryption_at_host_enabled is not None:
pulumi.set(__self__, "encryption_at_host_enabled", encryption_at_host_enabled)
if eviction_policy is not None:
pulumi.set(__self__, "eviction_policy", eviction_policy)
if extensions is not None:
pulumi.set(__self__, "extensions", extensions)
if extensions_time_budget is not None:
pulumi.set(__self__, "extensions_time_budget", extensions_time_budget)
if health_probe_id is not None:
pulumi.set(__self__, "health_probe_id", health_probe_id)
if identity is not None:
pulumi.set(__self__, "identity", identity)
if license_type is not None:
pulumi.set(__self__, "license_type", license_type)
if location is not None:
pulumi.set(__self__, "location", location)
if max_bid_price is not None:
pulumi.set(__self__, "max_bid_price", max_bid_price)
if name is not None:
pulumi.set(__self__, "name", name)
if overprovision is not None:
pulumi.set(__self__, "overprovision", overprovision)
if plan is not None:
pulumi.set(__self__, "plan", plan)
if platform_fault_domain_count is not None:
pulumi.set(__self__, "platform_fault_domain_count", platform_fault_domain_count)
if priority is not None:
pulumi.set(__self__, "priority", priority)
if provision_vm_agent is not None:
pulumi.set(__self__, "provision_vm_agent", provision_vm_agent)
if proximity_placement_group_id is not None:
pulumi.set(__self__, "proximity_placement_group_id", proximity_placement_group_id)
if rolling_upgrade_policy is not None:
pulumi.set(__self__, "rolling_upgrade_policy", rolling_upgrade_policy)
if scale_in_policy is not None:
pulumi.set(__self__, "scale_in_policy", scale_in_policy)
if secrets is not None:
pulumi.set(__self__, "secrets", secrets)
if secure_boot_enabled is not None:
pulumi.set(__self__, "secure_boot_enabled", secure_boot_enabled)
if single_placement_group is not None:
pulumi.set(__self__, "single_placement_group", single_placement_group)
if source_image_id is not None:
pulumi.set(__self__, "source_image_id", source_image_id)
if source_image_reference is not None:
pulumi.set(__self__, "source_image_reference", source_image_reference)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if terminate_notification is not None:
pulumi.set(__self__, "terminate_notification", terminate_notification)
if timezone is not None:
pulumi.set(__self__, "timezone", timezone)
if upgrade_mode is not None:
pulumi.set(__self__, "upgrade_mode", upgrade_mode)
if user_data is not None:
pulumi.set(__self__, "user_data", user_data)
if vtpm_enabled is not None:
pulumi.set(__self__, "vtpm_enabled", vtpm_enabled)
if winrm_listeners is not None:
pulumi.set(__self__, "winrm_listeners", winrm_listeners)
if zone_balance is not None:
pulumi.set(__self__, "zone_balance", zone_balance)
if zones is not None:
pulumi.set(__self__, "zones", zones)
@property
@pulumi.getter(name="adminPassword")
def admin_password(self) -> pulumi.Input[str]:
"""
The Password which should be used for the local administrator on each Virtual Machine Scale Set instance. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "admin_password")
@admin_password.setter
def admin_password(self, value: pulumi.Input[str]):
pulumi.set(self, "admin_password", value)
@property
@pulumi.getter(name="adminUsername")
def admin_username(self) -> pulumi.Input[str]:
"""
The username of the local administrator on each Virtual Machine Scale Set instance. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "admin_username")
@admin_username.setter
def admin_username(self, value: pulumi.Input[str]):
pulumi.set(self, "admin_username", value)
@property
@pulumi.getter
def instances(self) -> pulumi.Input[int]:
"""
The number of Virtual Machines in the Scale Set.
"""
return pulumi.get(self, "instances")
@instances.setter
def instances(self, value: pulumi.Input[int]):
pulumi.set(self, "instances", value)
@property
@pulumi.getter(name="networkInterfaces")
def network_interfaces(self) -> pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetNetworkInterfaceArgs']]]:
"""
One or more `network_interface` blocks as defined below.
"""
return pulumi.get(self, "network_interfaces")
@network_interfaces.setter
def network_interfaces(self, value: pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetNetworkInterfaceArgs']]]):
pulumi.set(self, "network_interfaces", value)
@property
@pulumi.getter(name="osDisk")
def os_disk(self) -> pulumi.Input['WindowsVirtualMachineScaleSetOsDiskArgs']:
"""
An `os_disk` block as defined below.
"""
return pulumi.get(self, "os_disk")
@os_disk.setter
def os_disk(self, value: pulumi.Input['WindowsVirtualMachineScaleSetOsDiskArgs']):
pulumi.set(self, "os_disk", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Input[str]:
"""
The name of the Resource Group in which the Windows Virtual Machine Scale Set should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter
def sku(self) -> pulumi.Input[str]:
"""
The Virtual Machine SKU for the Scale Set, such as `Standard_F2`.
"""
return pulumi.get(self, "sku")
@sku.setter
def sku(self, value: pulumi.Input[str]):
pulumi.set(self, "sku", value)
@property
@pulumi.getter(name="additionalCapabilities")
def additional_capabilities(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetAdditionalCapabilitiesArgs']]:
"""
An `additional_capabilities` block as defined below.
"""
return pulumi.get(self, "additional_capabilities")
@additional_capabilities.setter
def additional_capabilities(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetAdditionalCapabilitiesArgs']]):
pulumi.set(self, "additional_capabilities", value)
@property
@pulumi.getter(name="additionalUnattendContents")
def additional_unattend_contents(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetAdditionalUnattendContentArgs']]]]:
"""
One or more `additional_unattend_content` blocks as defined below.
"""
return pulumi.get(self, "additional_unattend_contents")
@additional_unattend_contents.setter
def additional_unattend_contents(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetAdditionalUnattendContentArgs']]]]):
pulumi.set(self, "additional_unattend_contents", value)
@property
@pulumi.getter(name="automaticInstanceRepair")
def automatic_instance_repair(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetAutomaticInstanceRepairArgs']]:
"""
An `automatic_instance_repair` block as defined below. To enable automatic instance repair, this Virtual Machine Scale Set must have a valid `health_probe_id` or an [Application Health Extension](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension).
"""
return pulumi.get(self, "automatic_instance_repair")
@automatic_instance_repair.setter
def automatic_instance_repair(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetAutomaticInstanceRepairArgs']]):
pulumi.set(self, "automatic_instance_repair", value)
@property
@pulumi.getter(name="automaticOsUpgradePolicy")
def automatic_os_upgrade_policy(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetAutomaticOsUpgradePolicyArgs']]:
"""
An `automatic_os_upgrade_policy` block as defined below. This can only be specified when `upgrade_mode` is set to `Automatic`.
"""
return pulumi.get(self, "automatic_os_upgrade_policy")
@automatic_os_upgrade_policy.setter
def automatic_os_upgrade_policy(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetAutomaticOsUpgradePolicyArgs']]):
pulumi.set(self, "automatic_os_upgrade_policy", value)
@property
@pulumi.getter(name="bootDiagnostics")
def boot_diagnostics(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetBootDiagnosticsArgs']]:
"""
A `boot_diagnostics` block as defined below.
"""
return pulumi.get(self, "boot_diagnostics")
@boot_diagnostics.setter
def boot_diagnostics(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetBootDiagnosticsArgs']]):
pulumi.set(self, "boot_diagnostics", value)
@property
@pulumi.getter(name="computerNamePrefix")
def computer_name_prefix(self) -> Optional[pulumi.Input[str]]:
"""
The prefix which should be used for the name of the Virtual Machines in this Scale Set. If unspecified this defaults to the value for the `name` field. If the value of the `name` field is not a valid `computer_name_prefix`, then you must specify `computer_name_prefix`.
"""
return pulumi.get(self, "computer_name_prefix")
@computer_name_prefix.setter
def computer_name_prefix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "computer_name_prefix", value)
@property
@pulumi.getter(name="customData")
def custom_data(self) -> Optional[pulumi.Input[str]]:
"""
The Base64-Encoded Custom Data which should be used for this Virtual Machine Scale Set.
"""
return pulumi.get(self, "custom_data")
@custom_data.setter
def custom_data(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "custom_data", value)
@property
@pulumi.getter(name="dataDisks")
def data_disks(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetDataDiskArgs']]]]:
"""
One or more `data_disk` blocks as defined below.
"""
return pulumi.get(self, "data_disks")
@data_disks.setter
def data_disks(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetDataDiskArgs']]]]):
pulumi.set(self, "data_disks", value)
@property
@pulumi.getter(name="doNotRunExtensionsOnOverprovisionedMachines")
def do_not_run_extensions_on_overprovisioned_machines(self) -> Optional[pulumi.Input[bool]]:
"""
Should Virtual Machine Extensions not be run on over-provisioned Virtual Machines in the Scale Set? Defaults to `false`.
"""
return pulumi.get(self, "do_not_run_extensions_on_overprovisioned_machines")
@do_not_run_extensions_on_overprovisioned_machines.setter
def do_not_run_extensions_on_overprovisioned_machines(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "do_not_run_extensions_on_overprovisioned_machines", value)
@property
@pulumi.getter(name="enableAutomaticUpdates")
def enable_automatic_updates(self) -> Optional[pulumi.Input[bool]]:
"""
Are automatic updates enabled for this Virtual Machine? Defaults to `true`.
"""
return pulumi.get(self, "enable_automatic_updates")
@enable_automatic_updates.setter
def enable_automatic_updates(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_automatic_updates", value)
@property
@pulumi.getter(name="encryptionAtHostEnabled")
def encryption_at_host_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Should all of the disks (including the temp disk) attached to this Virtual Machine be encrypted by enabling Encryption at Host?
"""
return pulumi.get(self, "encryption_at_host_enabled")
@encryption_at_host_enabled.setter
def encryption_at_host_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "encryption_at_host_enabled", value)
@property
@pulumi.getter(name="evictionPolicy")
def eviction_policy(self) -> Optional[pulumi.Input[str]]:
"""
The Policy which should be used when Virtual Machines are Evicted from the Scale Set. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "eviction_policy")
@eviction_policy.setter
def eviction_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "eviction_policy", value)
@property
@pulumi.getter
def extensions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetExtensionArgs']]]]:
"""
One or more `extension` blocks as defined below.
"""
return pulumi.get(self, "extensions")
@extensions.setter
def extensions(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetExtensionArgs']]]]):
pulumi.set(self, "extensions", value)
@property
@pulumi.getter(name="extensionsTimeBudget")
def extensions_time_budget(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the duration allocated for all extensions to start. The time duration should be between `15` minutes and `120` minutes (inclusive) and should be specified in ISO 8601 format. Defaults to `90` minutes (`PT1H30M`).
"""
return pulumi.get(self, "extensions_time_budget")
@extensions_time_budget.setter
def extensions_time_budget(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "extensions_time_budget", value)
@property
@pulumi.getter(name="healthProbeId")
def health_probe_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of a Load Balancer Probe which should be used to determine the health of an instance. This is Required and can only be specified when `upgrade_mode` is set to `Automatic` or `Rolling`.
"""
return pulumi.get(self, "health_probe_id")
@health_probe_id.setter
def health_probe_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "health_probe_id", value)
@property
@pulumi.getter
def identity(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetIdentityArgs']]:
"""
An `identity` block as defined below.
"""
return pulumi.get(self, "identity")
@identity.setter
def identity(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetIdentityArgs']]):
pulumi.set(self, "identity", value)
@property
@pulumi.getter(name="licenseType")
def license_type(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the type of on-premise license (also known as [Azure Hybrid Use Benefit](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-hybrid-use-benefit-licensing)) which should be used for this Virtual Machine Scale Set. Possible values are `None`, `Windows_Client` and `Windows_Server`.
"""
return pulumi.get(self, "license_type")
@license_type.setter
def license_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "license_type", value)
@property
@pulumi.getter
def location(self) -> Optional[pulumi.Input[str]]:
"""
The Azure location where the Windows Virtual Machine Scale Set should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "location")
@location.setter
def location(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "location", value)
@property
@pulumi.getter(name="maxBidPrice")
def max_bid_price(self) -> Optional[pulumi.Input[float]]:
"""
The maximum price you're willing to pay for each Virtual Machine in this Scale Set, in US Dollars, which must be greater than the current spot price. If this bid price falls below the current spot price the Virtual Machines in the Scale Set will be evicted using the `eviction_policy`. Defaults to `-1`, which means that each Virtual Machine in the Scale Set should not be evicted for price reasons.
"""
return pulumi.get(self, "max_bid_price")
@max_bid_price.setter
def max_bid_price(self, value: Optional[pulumi.Input[float]]):
pulumi.set(self, "max_bid_price", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the Windows Virtual Machine Scale Set. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def overprovision(self) -> Optional[pulumi.Input[bool]]:
"""
Should Azure over-provision Virtual Machines in this Scale Set? This means that multiple Virtual Machines will be provisioned and Azure will keep the instances which become available first, which improves provisioning success rates and deployment time. You're not billed for these over-provisioned VMs and they don't count towards the Subscription Quota. Defaults to `true`.
"""
return pulumi.get(self, "overprovision")
@overprovision.setter
def overprovision(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "overprovision", value)
@property
@pulumi.getter
def plan(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetPlanArgs']]:
"""
A `plan` block as documented below.
"""
return pulumi.get(self, "plan")
@plan.setter
def plan(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetPlanArgs']]):
pulumi.set(self, "plan", value)
@property
@pulumi.getter(name="platformFaultDomainCount")
def platform_fault_domain_count(self) -> Optional[pulumi.Input[int]]:
"""
Specifies the number of fault domains that are used by this Windows Virtual Machine Scale Set. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "platform_fault_domain_count")
@platform_fault_domain_count.setter
def platform_fault_domain_count(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "platform_fault_domain_count", value)
@property
@pulumi.getter
def priority(self) -> Optional[pulumi.Input[str]]:
"""
The Priority of this Virtual Machine Scale Set. Possible values are `Regular` and `Spot`. Defaults to `Regular`. Changing this value forces a new resource.
"""
return pulumi.get(self, "priority")
@priority.setter
def priority(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "priority", value)
@property
@pulumi.getter(name="provisionVmAgent")
def provision_vm_agent(self) -> Optional[pulumi.Input[bool]]:
"""
Should the Azure VM Agent be provisioned on each Virtual Machine in the Scale Set? Defaults to `true`. Changing this value forces a new resource to be created.
"""
return pulumi.get(self, "provision_vm_agent")
@provision_vm_agent.setter
def provision_vm_agent(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "provision_vm_agent", value)
@property
@pulumi.getter(name="proximityPlacementGroupId")
def proximity_placement_group_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the Proximity Placement Group to which the Virtual Machine Scale Set should be assigned. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "proximity_placement_group_id")
@proximity_placement_group_id.setter
def proximity_placement_group_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "proximity_placement_group_id", value)
@property
@pulumi.getter(name="rollingUpgradePolicy")
def rolling_upgrade_policy(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetRollingUpgradePolicyArgs']]:
"""
A `rolling_upgrade_policy` block as defined below. This is Required and can only be specified when `upgrade_mode` is set to `Automatic` or `Rolling`.
"""
return pulumi.get(self, "rolling_upgrade_policy")
@rolling_upgrade_policy.setter
def rolling_upgrade_policy(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetRollingUpgradePolicyArgs']]):
pulumi.set(self, "rolling_upgrade_policy", value)
@property
@pulumi.getter(name="scaleInPolicy")
def scale_in_policy(self) -> Optional[pulumi.Input[str]]:
"""
The scale-in policy rule that decides which virtual machines are chosen for removal when a Virtual Machine Scale Set is scaled in. Possible values for the scale-in policy rules are `Default`, `NewestVM` and `OldestVM`, defaults to `Default`. For more information about the scale-in policy, please [refer to this doc](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy).
"""
return pulumi.get(self, "scale_in_policy")
@scale_in_policy.setter
def scale_in_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "scale_in_policy", value)
@property
@pulumi.getter
def secrets(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetSecretArgs']]]]:
"""
One or more `secret` blocks as defined below.
"""
return pulumi.get(self, "secrets")
@secrets.setter
def secrets(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetSecretArgs']]]]):
pulumi.set(self, "secrets", value)
@property
@pulumi.getter(name="secureBootEnabled")
def secure_boot_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Specifies if Secure Boot and Trusted Launch are enabled for the Virtual Machine. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "secure_boot_enabled")
@secure_boot_enabled.setter
def secure_boot_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "secure_boot_enabled", value)
@property
@pulumi.getter(name="singlePlacementGroup")
def single_placement_group(self) -> Optional[pulumi.Input[bool]]:
"""
Should this Virtual Machine Scale Set be limited to a Single Placement Group, which means the number of instances will be capped at 100 Virtual Machines? Defaults to `true`.
"""
return pulumi.get(self, "single_placement_group")
@single_placement_group.setter
def single_placement_group(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "single_placement_group", value)
@property
@pulumi.getter(name="sourceImageId")
def source_image_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of an Image which each Virtual Machine in this Scale Set should be based on.
"""
return pulumi.get(self, "source_image_id")
@source_image_id.setter
def source_image_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_image_id", value)
@property
@pulumi.getter(name="sourceImageReference")
def source_image_reference(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetSourceImageReferenceArgs']]:
"""
A `source_image_reference` block as defined below.
"""
return pulumi.get(self, "source_image_reference")
@source_image_reference.setter
def source_image_reference(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetSourceImageReferenceArgs']]):
pulumi.set(self, "source_image_reference", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A mapping of tags which should be assigned to this Virtual Machine Scale Set.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter(name="terminateNotification")
def terminate_notification(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetTerminateNotificationArgs']]:
"""
A `terminate_notification` block as defined below.
"""
return pulumi.get(self, "terminate_notification")
@terminate_notification.setter
def terminate_notification(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetTerminateNotificationArgs']]):
pulumi.set(self, "terminate_notification", value)
@property
@pulumi.getter
def timezone(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the time zone of the virtual machine, [the possible values are defined here](https://jackstromberg.com/2017/01/list-of-time-zones-consumed-by-azure/).
"""
return pulumi.get(self, "timezone")
@timezone.setter
def timezone(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "timezone", value)
@property
@pulumi.getter(name="upgradeMode")
def upgrade_mode(self) -> Optional[pulumi.Input[str]]:
"""
Specifies how Upgrades (e.g. changing the Image/SKU) should be performed to Virtual Machine Instances. Possible values are `Automatic`, `Manual` and `Rolling`. Defaults to `Manual`.
"""
return pulumi.get(self, "upgrade_mode")
@upgrade_mode.setter
def upgrade_mode(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "upgrade_mode", value)
@property
@pulumi.getter(name="userData")
def user_data(self) -> Optional[pulumi.Input[str]]:
"""
The Base64-Encoded User Data which should be used for this Virtual Machine Scale Set.
"""
return pulumi.get(self, "user_data")
@user_data.setter
def user_data(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "user_data", value)
@property
@pulumi.getter(name="vtpmEnabled")
def vtpm_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Specifies if vTPM (Virtual Trusted Platform Module) and Trusted Launch are enabled for the Virtual Machine. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "vtpm_enabled")
@vtpm_enabled.setter
def vtpm_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "vtpm_enabled", value)
@property
@pulumi.getter(name="winrmListeners")
def winrm_listeners(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetWinrmListenerArgs']]]]:
"""
One or more `winrm_listener` blocks as defined below.
"""
return pulumi.get(self, "winrm_listeners")
@winrm_listeners.setter
def winrm_listeners(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetWinrmListenerArgs']]]]):
pulumi.set(self, "winrm_listeners", value)
@property
@pulumi.getter(name="zoneBalance")
def zone_balance(self) -> Optional[pulumi.Input[bool]]:
"""
Should the Virtual Machines in this Scale Set be strictly evenly distributed across Availability Zones? Defaults to `false`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "zone_balance")
@zone_balance.setter
def zone_balance(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "zone_balance", value)
@property
@pulumi.getter
def zones(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A list of Availability Zones in which the Virtual Machines in this Scale Set should be created. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "zones")
@zones.setter
def zones(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "zones", value)
@pulumi.input_type
class _WindowsVirtualMachineScaleSetState:
def __init__(__self__, *,
additional_capabilities: Optional[pulumi.Input['WindowsVirtualMachineScaleSetAdditionalCapabilitiesArgs']] = None,
additional_unattend_contents: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetAdditionalUnattendContentArgs']]]] = None,
admin_password: Optional[pulumi.Input[str]] = None,
admin_username: Optional[pulumi.Input[str]] = None,
automatic_instance_repair: Optional[pulumi.Input['WindowsVirtualMachineScaleSetAutomaticInstanceRepairArgs']] = None,
automatic_os_upgrade_policy: Optional[pulumi.Input['WindowsVirtualMachineScaleSetAutomaticOsUpgradePolicyArgs']] = None,
boot_diagnostics: Optional[pulumi.Input['WindowsVirtualMachineScaleSetBootDiagnosticsArgs']] = None,
computer_name_prefix: Optional[pulumi.Input[str]] = None,
custom_data: Optional[pulumi.Input[str]] = None,
data_disks: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetDataDiskArgs']]]] = None,
do_not_run_extensions_on_overprovisioned_machines: Optional[pulumi.Input[bool]] = None,
enable_automatic_updates: Optional[pulumi.Input[bool]] = None,
encryption_at_host_enabled: Optional[pulumi.Input[bool]] = None,
eviction_policy: Optional[pulumi.Input[str]] = None,
extensions: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetExtensionArgs']]]] = None,
extensions_time_budget: Optional[pulumi.Input[str]] = None,
health_probe_id: Optional[pulumi.Input[str]] = None,
identity: Optional[pulumi.Input['WindowsVirtualMachineScaleSetIdentityArgs']] = None,
instances: Optional[pulumi.Input[int]] = None,
license_type: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
max_bid_price: Optional[pulumi.Input[float]] = None,
name: Optional[pulumi.Input[str]] = None,
network_interfaces: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetNetworkInterfaceArgs']]]] = None,
os_disk: Optional[pulumi.Input['WindowsVirtualMachineScaleSetOsDiskArgs']] = None,
overprovision: Optional[pulumi.Input[bool]] = None,
plan: Optional[pulumi.Input['WindowsVirtualMachineScaleSetPlanArgs']] = None,
platform_fault_domain_count: Optional[pulumi.Input[int]] = None,
priority: Optional[pulumi.Input[str]] = None,
provision_vm_agent: Optional[pulumi.Input[bool]] = None,
proximity_placement_group_id: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
rolling_upgrade_policy: Optional[pulumi.Input['WindowsVirtualMachineScaleSetRollingUpgradePolicyArgs']] = None,
scale_in_policy: Optional[pulumi.Input[str]] = None,
secrets: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetSecretArgs']]]] = None,
secure_boot_enabled: Optional[pulumi.Input[bool]] = None,
single_placement_group: Optional[pulumi.Input[bool]] = None,
sku: Optional[pulumi.Input[str]] = None,
source_image_id: Optional[pulumi.Input[str]] = None,
source_image_reference: Optional[pulumi.Input['WindowsVirtualMachineScaleSetSourceImageReferenceArgs']] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
terminate_notification: Optional[pulumi.Input['WindowsVirtualMachineScaleSetTerminateNotificationArgs']] = None,
timezone: Optional[pulumi.Input[str]] = None,
unique_id: Optional[pulumi.Input[str]] = None,
upgrade_mode: Optional[pulumi.Input[str]] = None,
user_data: Optional[pulumi.Input[str]] = None,
vtpm_enabled: Optional[pulumi.Input[bool]] = None,
winrm_listeners: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetWinrmListenerArgs']]]] = None,
zone_balance: Optional[pulumi.Input[bool]] = None,
zones: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
"""
Input properties used for looking up and filtering WindowsVirtualMachineScaleSet resources.
:param pulumi.Input['WindowsVirtualMachineScaleSetAdditionalCapabilitiesArgs'] additional_capabilities: An `additional_capabilities` block as defined below.
:param pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetAdditionalUnattendContentArgs']]] additional_unattend_contents: One or more `additional_unattend_content` blocks as defined below.
:param pulumi.Input[str] admin_password: The Password which should be used for the local administrator on this Virtual Machine. Changing this forces a new resource to be created.
:param pulumi.Input[str] admin_username: The username of the local administrator on each Virtual Machine Scale Set instance. Changing this forces a new resource to be created.
:param pulumi.Input['WindowsVirtualMachineScaleSetAutomaticInstanceRepairArgs'] automatic_instance_repair: An `automatic_instance_repair` block as defined below. To enable the automatic instance repair, this Virtual Machine Scale Set must have a valid `health_probe_id` or an [Application Health Extension](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension).
:param pulumi.Input['WindowsVirtualMachineScaleSetAutomaticOsUpgradePolicyArgs'] automatic_os_upgrade_policy: An `automatic_os_upgrade_policy` block as defined below. This can only be specified when `upgrade_mode` is set to `Automatic`.
:param pulumi.Input['WindowsVirtualMachineScaleSetBootDiagnosticsArgs'] boot_diagnostics: A `boot_diagnostics` block as defined below.
:param pulumi.Input[str] computer_name_prefix: The prefix which should be used for the name of the Virtual Machines in this Scale Set. If unspecified, this defaults to the value of the `name` field. If the value of the `name` field is not a valid `computer_name_prefix`, then you must specify `computer_name_prefix`.
:param pulumi.Input[str] custom_data: The Base64-Encoded Custom Data which should be used for this Virtual Machine Scale Set.
:param pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetDataDiskArgs']]] data_disks: One or more `data_disk` blocks as defined below.
:param pulumi.Input[bool] do_not_run_extensions_on_overprovisioned_machines: Should Virtual Machine Extensions be run on Overprovisioned Virtual Machines in the Scale Set? Defaults to `false`.
:param pulumi.Input[bool] enable_automatic_updates: Are automatic updates enabled for this Virtual Machine? Defaults to `true`.
:param pulumi.Input[bool] encryption_at_host_enabled: Should all of the disks (including the temp disk) attached to this Virtual Machine be encrypted by enabling Encryption at Host?
:param pulumi.Input[str] eviction_policy: The Policy which should be used when Virtual Machines are Evicted from the Scale Set. Changing this forces a new resource to be created.
:param pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetExtensionArgs']]] extensions: One or more `extension` blocks as defined below.
:param pulumi.Input[str] extensions_time_budget: Specifies the duration allocated for all extensions to start. The time duration should be between `15` minutes and `120` minutes (inclusive) and should be specified in ISO 8601 format. Defaults to `90` minutes (`PT1H30M`).
:param pulumi.Input[str] health_probe_id: The ID of a Load Balancer Probe which should be used to determine the health of an instance. This is Required and can only be specified when `upgrade_mode` is set to `Automatic` or `Rolling`.
:param pulumi.Input['WindowsVirtualMachineScaleSetIdentityArgs'] identity: An `identity` block as defined below.
:param pulumi.Input[int] instances: The number of Virtual Machines in the Scale Set.
:param pulumi.Input[str] license_type: Specifies the type of on-premises license (also known as [Azure Hybrid Use Benefit](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-hybrid-use-benefit-licensing)) which should be used for this Virtual Machine Scale Set. Possible values are `None`, `Windows_Client` and `Windows_Server`.
:param pulumi.Input[str] location: The Azure location where the Windows Virtual Machine Scale Set should exist. Changing this forces a new resource to be created.
:param pulumi.Input[float] max_bid_price: The maximum price you're willing to pay for each Virtual Machine in this Scale Set, in US Dollars, which must be greater than the current spot price. If this bid price falls below the current spot price, the Virtual Machines in the Scale Set will be evicted using the `eviction_policy`. Defaults to `-1`, which means that each Virtual Machine in the Scale Set should not be evicted for price reasons.
:param pulumi.Input[str] name: The name of the Windows Virtual Machine Scale Set. Changing this forces a new resource to be created.
:param pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetNetworkInterfaceArgs']]] network_interfaces: One or more `network_interface` blocks as defined below.
:param pulumi.Input['WindowsVirtualMachineScaleSetOsDiskArgs'] os_disk: An `os_disk` block as defined below.
:param pulumi.Input[bool] overprovision: Should Azure over-provision Virtual Machines in this Scale Set? This means that multiple Virtual Machines will be provisioned and Azure will keep the instances which become available first, which improves provisioning success rates and deployment times. You're not billed for these over-provisioned VMs and they don't count towards the Subscription Quota. Defaults to `true`.
:param pulumi.Input['WindowsVirtualMachineScaleSetPlanArgs'] plan: A `plan` block as documented below.
:param pulumi.Input[int] platform_fault_domain_count: Specifies the number of fault domains that are used by this Windows Virtual Machine Scale Set. Changing this forces a new resource to be created.
:param pulumi.Input[str] priority: The Priority of this Virtual Machine Scale Set. Possible values are `Regular` and `Spot`. Defaults to `Regular`. Changing this value forces a new resource to be created.
:param pulumi.Input[bool] provision_vm_agent: Should the Azure VM Agent be provisioned on each Virtual Machine in the Scale Set? Defaults to `true`. Changing this value forces a new resource to be created.
:param pulumi.Input[str] proximity_placement_group_id: The ID of the Proximity Placement Group to which the Virtual Machine Scale Set should be assigned. Changing this forces a new resource to be created.
:param pulumi.Input[str] resource_group_name: The name of the Resource Group in which the Windows Virtual Machine Scale Set should exist. Changing this forces a new resource to be created.
:param pulumi.Input['WindowsVirtualMachineScaleSetRollingUpgradePolicyArgs'] rolling_upgrade_policy: A `rolling_upgrade_policy` block as defined below. This is Required and can only be specified when `upgrade_mode` is set to `Automatic` or `Rolling`.
:param pulumi.Input[str] scale_in_policy: The scale-in policy rule that decides which virtual machines are chosen for removal when a Virtual Machine Scale Set is scaled in. Possible values for the scale-in policy rules are `Default`, `NewestVM` and `OldestVM`, defaults to `Default`. For more information about the scale-in policy, please [refer to this doc](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy).
:param pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetSecretArgs']]] secrets: One or more `secret` blocks as defined below.
:param pulumi.Input[bool] secure_boot_enabled: Specifies if Secure Boot and Trusted Launch are enabled for the Virtual Machine. Changing this forces a new resource to be created.
:param pulumi.Input[bool] single_placement_group: Should this Virtual Machine Scale Set be limited to a Single Placement Group, which means the number of instances will be capped at 100 Virtual Machines? Defaults to `true`.
:param pulumi.Input[str] sku: The Virtual Machine SKU for the Scale Set, such as `Standard_F2`.
:param pulumi.Input[str] source_image_id: The ID of an Image which each Virtual Machine in this Scale Set should be based on.
:param pulumi.Input['WindowsVirtualMachineScaleSetSourceImageReferenceArgs'] source_image_reference: A `source_image_reference` block as defined below.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags which should be assigned to this Virtual Machine Scale Set.
:param pulumi.Input['WindowsVirtualMachineScaleSetTerminateNotificationArgs'] terminate_notification: A `terminate_notification` block as defined below.
:param pulumi.Input[str] timezone: Specifies the time zone of the virtual machine, [the possible values are defined here](https://jackstromberg.com/2017/01/list-of-time-zones-consumed-by-azure/).
:param pulumi.Input[str] unique_id: The Unique ID for this Windows Virtual Machine Scale Set.
:param pulumi.Input[str] upgrade_mode: Specifies how Upgrades (e.g. changing the Image/SKU) should be performed to Virtual Machine Instances. Possible values are `Automatic`, `Manual` and `Rolling`. Defaults to `Manual`.
:param pulumi.Input[str] user_data: The Base64-Encoded User Data which should be used for this Virtual Machine Scale Set.
:param pulumi.Input[bool] vtpm_enabled: Specifies if vTPM (Virtual Trusted Platform Module) and Trusted Launch are enabled for the Virtual Machine. Changing this forces a new resource to be created.
:param pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetWinrmListenerArgs']]] winrm_listeners: One or more `winrm_listener` blocks as defined below.
:param pulumi.Input[bool] zone_balance: Should the Virtual Machines in this Scale Set be strictly evenly distributed across Availability Zones? Defaults to `false`. Changing this forces a new resource to be created.
:param pulumi.Input[Sequence[pulumi.Input[str]]] zones: A list of Availability Zones in which the Virtual Machines in this Scale Set should be created. Changing this forces a new resource to be created.
"""
if additional_capabilities is not None:
pulumi.set(__self__, "additional_capabilities", additional_capabilities)
if additional_unattend_contents is not None:
pulumi.set(__self__, "additional_unattend_contents", additional_unattend_contents)
if admin_password is not None:
pulumi.set(__self__, "admin_password", admin_password)
if admin_username is not None:
pulumi.set(__self__, "admin_username", admin_username)
if automatic_instance_repair is not None:
pulumi.set(__self__, "automatic_instance_repair", automatic_instance_repair)
if automatic_os_upgrade_policy is not None:
pulumi.set(__self__, "automatic_os_upgrade_policy", automatic_os_upgrade_policy)
if boot_diagnostics is not None:
pulumi.set(__self__, "boot_diagnostics", boot_diagnostics)
if computer_name_prefix is not None:
pulumi.set(__self__, "computer_name_prefix", computer_name_prefix)
if custom_data is not None:
pulumi.set(__self__, "custom_data", custom_data)
if data_disks is not None:
pulumi.set(__self__, "data_disks", data_disks)
if do_not_run_extensions_on_overprovisioned_machines is not None:
pulumi.set(__self__, "do_not_run_extensions_on_overprovisioned_machines", do_not_run_extensions_on_overprovisioned_machines)
if enable_automatic_updates is not None:
pulumi.set(__self__, "enable_automatic_updates", enable_automatic_updates)
if encryption_at_host_enabled is not None:
pulumi.set(__self__, "encryption_at_host_enabled", encryption_at_host_enabled)
if eviction_policy is not None:
pulumi.set(__self__, "eviction_policy", eviction_policy)
if extensions is not None:
pulumi.set(__self__, "extensions", extensions)
if extensions_time_budget is not None:
pulumi.set(__self__, "extensions_time_budget", extensions_time_budget)
if health_probe_id is not None:
pulumi.set(__self__, "health_probe_id", health_probe_id)
if identity is not None:
pulumi.set(__self__, "identity", identity)
if instances is not None:
pulumi.set(__self__, "instances", instances)
if license_type is not None:
pulumi.set(__self__, "license_type", license_type)
if location is not None:
pulumi.set(__self__, "location", location)
if max_bid_price is not None:
pulumi.set(__self__, "max_bid_price", max_bid_price)
if name is not None:
pulumi.set(__self__, "name", name)
if network_interfaces is not None:
pulumi.set(__self__, "network_interfaces", network_interfaces)
if os_disk is not None:
pulumi.set(__self__, "os_disk", os_disk)
if overprovision is not None:
pulumi.set(__self__, "overprovision", overprovision)
if plan is not None:
pulumi.set(__self__, "plan", plan)
if platform_fault_domain_count is not None:
pulumi.set(__self__, "platform_fault_domain_count", platform_fault_domain_count)
if priority is not None:
pulumi.set(__self__, "priority", priority)
if provision_vm_agent is not None:
pulumi.set(__self__, "provision_vm_agent", provision_vm_agent)
if proximity_placement_group_id is not None:
pulumi.set(__self__, "proximity_placement_group_id", proximity_placement_group_id)
if resource_group_name is not None:
pulumi.set(__self__, "resource_group_name", resource_group_name)
if rolling_upgrade_policy is not None:
pulumi.set(__self__, "rolling_upgrade_policy", rolling_upgrade_policy)
if scale_in_policy is not None:
pulumi.set(__self__, "scale_in_policy", scale_in_policy)
if secrets is not None:
pulumi.set(__self__, "secrets", secrets)
if secure_boot_enabled is not None:
pulumi.set(__self__, "secure_boot_enabled", secure_boot_enabled)
if single_placement_group is not None:
pulumi.set(__self__, "single_placement_group", single_placement_group)
if sku is not None:
pulumi.set(__self__, "sku", sku)
if source_image_id is not None:
pulumi.set(__self__, "source_image_id", source_image_id)
if source_image_reference is not None:
pulumi.set(__self__, "source_image_reference", source_image_reference)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if terminate_notification is not None:
pulumi.set(__self__, "terminate_notification", terminate_notification)
if timezone is not None:
pulumi.set(__self__, "timezone", timezone)
if unique_id is not None:
pulumi.set(__self__, "unique_id", unique_id)
if upgrade_mode is not None:
pulumi.set(__self__, "upgrade_mode", upgrade_mode)
if user_data is not None:
pulumi.set(__self__, "user_data", user_data)
if vtpm_enabled is not None:
pulumi.set(__self__, "vtpm_enabled", vtpm_enabled)
if winrm_listeners is not None:
pulumi.set(__self__, "winrm_listeners", winrm_listeners)
if zone_balance is not None:
pulumi.set(__self__, "zone_balance", zone_balance)
if zones is not None:
pulumi.set(__self__, "zones", zones)
@property
@pulumi.getter(name="additionalCapabilities")
def additional_capabilities(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetAdditionalCapabilitiesArgs']]:
"""
An `additional_capabilities` block as defined below.
"""
return pulumi.get(self, "additional_capabilities")
@additional_capabilities.setter
def additional_capabilities(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetAdditionalCapabilitiesArgs']]):
pulumi.set(self, "additional_capabilities", value)
@property
@pulumi.getter(name="additionalUnattendContents")
def additional_unattend_contents(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetAdditionalUnattendContentArgs']]]]:
"""
One or more `additional_unattend_content` blocks as defined below.
"""
return pulumi.get(self, "additional_unattend_contents")
@additional_unattend_contents.setter
def additional_unattend_contents(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetAdditionalUnattendContentArgs']]]]):
pulumi.set(self, "additional_unattend_contents", value)
@property
@pulumi.getter(name="adminPassword")
def admin_password(self) -> Optional[pulumi.Input[str]]:
"""
The Password which should be used for the local administrator on this Virtual Machine. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "admin_password")
@admin_password.setter
def admin_password(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "admin_password", value)
@property
@pulumi.getter(name="adminUsername")
def admin_username(self) -> Optional[pulumi.Input[str]]:
"""
The username of the local administrator on each Virtual Machine Scale Set instance. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "admin_username")
@admin_username.setter
def admin_username(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "admin_username", value)
@property
@pulumi.getter(name="automaticInstanceRepair")
def automatic_instance_repair(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetAutomaticInstanceRepairArgs']]:
"""
An `automatic_instance_repair` block as defined below. To enable the automatic instance repair, this Virtual Machine Scale Set must have a valid `health_probe_id` or an [Application Health Extension](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension).
"""
return pulumi.get(self, "automatic_instance_repair")
@automatic_instance_repair.setter
def automatic_instance_repair(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetAutomaticInstanceRepairArgs']]):
pulumi.set(self, "automatic_instance_repair", value)
@property
@pulumi.getter(name="automaticOsUpgradePolicy")
def automatic_os_upgrade_policy(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetAutomaticOsUpgradePolicyArgs']]:
"""
An `automatic_os_upgrade_policy` block as defined below. This can only be specified when `upgrade_mode` is set to `Automatic`.
"""
return pulumi.get(self, "automatic_os_upgrade_policy")
@automatic_os_upgrade_policy.setter
def automatic_os_upgrade_policy(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetAutomaticOsUpgradePolicyArgs']]):
pulumi.set(self, "automatic_os_upgrade_policy", value)
@property
@pulumi.getter(name="bootDiagnostics")
def boot_diagnostics(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetBootDiagnosticsArgs']]:
"""
A `boot_diagnostics` block as defined below.
"""
return pulumi.get(self, "boot_diagnostics")
@boot_diagnostics.setter
def boot_diagnostics(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetBootDiagnosticsArgs']]):
pulumi.set(self, "boot_diagnostics", value)
@property
@pulumi.getter(name="computerNamePrefix")
def computer_name_prefix(self) -> Optional[pulumi.Input[str]]:
"""
The prefix which should be used for the name of the Virtual Machines in this Scale Set. If unspecified, this defaults to the value of the `name` field. If the value of the `name` field is not a valid `computer_name_prefix`, then you must specify `computer_name_prefix`.
"""
return pulumi.get(self, "computer_name_prefix")
@computer_name_prefix.setter
def computer_name_prefix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "computer_name_prefix", value)
@property
@pulumi.getter(name="customData")
def custom_data(self) -> Optional[pulumi.Input[str]]:
"""
The Base64-Encoded Custom Data which should be used for this Virtual Machine Scale Set.
"""
return pulumi.get(self, "custom_data")
@custom_data.setter
def custom_data(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "custom_data", value)
@property
@pulumi.getter(name="dataDisks")
def data_disks(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetDataDiskArgs']]]]:
"""
One or more `data_disk` blocks as defined below.
"""
return pulumi.get(self, "data_disks")
@data_disks.setter
def data_disks(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetDataDiskArgs']]]]):
pulumi.set(self, "data_disks", value)
@property
@pulumi.getter(name="doNotRunExtensionsOnOverprovisionedMachines")
def do_not_run_extensions_on_overprovisioned_machines(self) -> Optional[pulumi.Input[bool]]:
"""
Should Virtual Machine Extensions be run on Overprovisioned Virtual Machines in the Scale Set? Defaults to `false`.
"""
return pulumi.get(self, "do_not_run_extensions_on_overprovisioned_machines")
@do_not_run_extensions_on_overprovisioned_machines.setter
def do_not_run_extensions_on_overprovisioned_machines(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "do_not_run_extensions_on_overprovisioned_machines", value)
@property
@pulumi.getter(name="enableAutomaticUpdates")
def enable_automatic_updates(self) -> Optional[pulumi.Input[bool]]:
"""
Are automatic updates enabled for this Virtual Machine? Defaults to `true`.
"""
return pulumi.get(self, "enable_automatic_updates")
@enable_automatic_updates.setter
def enable_automatic_updates(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_automatic_updates", value)
@property
@pulumi.getter(name="encryptionAtHostEnabled")
def encryption_at_host_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Should all of the disks (including the temp disk) attached to this Virtual Machine be encrypted by enabling Encryption at Host?
"""
return pulumi.get(self, "encryption_at_host_enabled")
@encryption_at_host_enabled.setter
def encryption_at_host_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "encryption_at_host_enabled", value)
@property
@pulumi.getter(name="evictionPolicy")
def eviction_policy(self) -> Optional[pulumi.Input[str]]:
"""
The Policy which should be used when Virtual Machines are Evicted from the Scale Set. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "eviction_policy")
@eviction_policy.setter
def eviction_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "eviction_policy", value)
@property
@pulumi.getter
def extensions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetExtensionArgs']]]]:
"""
One or more `extension` blocks as defined below.
"""
return pulumi.get(self, "extensions")
@extensions.setter
def extensions(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetExtensionArgs']]]]):
pulumi.set(self, "extensions", value)
@property
@pulumi.getter(name="extensionsTimeBudget")
def extensions_time_budget(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the duration allocated for all extensions to start. The time duration should be between `15` minutes and `120` minutes (inclusive) and should be specified in ISO 8601 format. Defaults to `90` minutes (`PT1H30M`).
"""
return pulumi.get(self, "extensions_time_budget")
@extensions_time_budget.setter
def extensions_time_budget(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "extensions_time_budget", value)
@property
@pulumi.getter(name="healthProbeId")
def health_probe_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of a Load Balancer Probe which should be used to determine the health of an instance. This is Required and can only be specified when `upgrade_mode` is set to `Automatic` or `Rolling`.
"""
return pulumi.get(self, "health_probe_id")
@health_probe_id.setter
def health_probe_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "health_probe_id", value)
@property
@pulumi.getter
def identity(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetIdentityArgs']]:
"""
An `identity` block as defined below.
"""
return pulumi.get(self, "identity")
@identity.setter
def identity(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetIdentityArgs']]):
pulumi.set(self, "identity", value)
@property
@pulumi.getter
def instances(self) -> Optional[pulumi.Input[int]]:
"""
The number of Virtual Machines in the Scale Set.
"""
return pulumi.get(self, "instances")
@instances.setter
def instances(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "instances", value)
@property
@pulumi.getter(name="licenseType")
def license_type(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the type of on-premises license (also known as [Azure Hybrid Use Benefit](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-hybrid-use-benefit-licensing)) which should be used for this Virtual Machine Scale Set. Possible values are `None`, `Windows_Client` and `Windows_Server`.
"""
return pulumi.get(self, "license_type")
@license_type.setter
def license_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "license_type", value)
@property
@pulumi.getter
def location(self) -> Optional[pulumi.Input[str]]:
"""
The Azure location where the Windows Virtual Machine Scale Set should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "location")
@location.setter
def location(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "location", value)
@property
@pulumi.getter(name="maxBidPrice")
def max_bid_price(self) -> Optional[pulumi.Input[float]]:
"""
The maximum price you're willing to pay for each Virtual Machine in this Scale Set, in US Dollars, which must be greater than the current spot price. If this bid price falls below the current spot price, the Virtual Machines in the Scale Set will be evicted using the `eviction_policy`. Defaults to `-1`, which means that each Virtual Machine in the Scale Set should not be evicted for price reasons.
"""
return pulumi.get(self, "max_bid_price")
@max_bid_price.setter
def max_bid_price(self, value: Optional[pulumi.Input[float]]):
pulumi.set(self, "max_bid_price", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the Windows Virtual Machine Scale Set. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="networkInterfaces")
def network_interfaces(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetNetworkInterfaceArgs']]]]:
"""
One or more `network_interface` blocks as defined below.
"""
return pulumi.get(self, "network_interfaces")
@network_interfaces.setter
def network_interfaces(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetNetworkInterfaceArgs']]]]):
pulumi.set(self, "network_interfaces", value)
@property
@pulumi.getter(name="osDisk")
def os_disk(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetOsDiskArgs']]:
"""
An `os_disk` block as defined below.
"""
return pulumi.get(self, "os_disk")
@os_disk.setter
def os_disk(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetOsDiskArgs']]):
pulumi.set(self, "os_disk", value)
@property
@pulumi.getter
def overprovision(self) -> Optional[pulumi.Input[bool]]:
"""
Should Azure over-provision Virtual Machines in this Scale Set? This means that multiple Virtual Machines will be provisioned and Azure will keep the instances which become available first, which improves provisioning success rates and deployment times. You're not billed for these over-provisioned VMs and they don't count towards the Subscription Quota. Defaults to `true`.
"""
return pulumi.get(self, "overprovision")
@overprovision.setter
def overprovision(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "overprovision", value)
@property
@pulumi.getter
def plan(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetPlanArgs']]:
"""
A `plan` block as documented below.
"""
return pulumi.get(self, "plan")
@plan.setter
def plan(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetPlanArgs']]):
pulumi.set(self, "plan", value)
@property
@pulumi.getter(name="platformFaultDomainCount")
def platform_fault_domain_count(self) -> Optional[pulumi.Input[int]]:
"""
Specifies the number of fault domains that are used by this Windows Virtual Machine Scale Set. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "platform_fault_domain_count")
@platform_fault_domain_count.setter
def platform_fault_domain_count(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "platform_fault_domain_count", value)
@property
@pulumi.getter
def priority(self) -> Optional[pulumi.Input[str]]:
"""
The Priority of this Virtual Machine Scale Set. Possible values are `Regular` and `Spot`. Defaults to `Regular`. Changing this value forces a new resource to be created.
"""
return pulumi.get(self, "priority")
@priority.setter
def priority(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "priority", value)
@property
@pulumi.getter(name="provisionVmAgent")
def provision_vm_agent(self) -> Optional[pulumi.Input[bool]]:
"""
Should the Azure VM Agent be provisioned on each Virtual Machine in the Scale Set? Defaults to `true`. Changing this value forces a new resource to be created.
"""
return pulumi.get(self, "provision_vm_agent")
@provision_vm_agent.setter
def provision_vm_agent(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "provision_vm_agent", value)
@property
@pulumi.getter(name="proximityPlacementGroupId")
def proximity_placement_group_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the Proximity Placement Group to which the Virtual Machine Scale Set should be assigned. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "proximity_placement_group_id")
@proximity_placement_group_id.setter
def proximity_placement_group_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "proximity_placement_group_id", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the Resource Group in which the Windows Virtual Machine Scale Set should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter(name="rollingUpgradePolicy")
def rolling_upgrade_policy(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetRollingUpgradePolicyArgs']]:
"""
A `rolling_upgrade_policy` block as defined below. This is Required and can only be specified when `upgrade_mode` is set to `Automatic` or `Rolling`.
"""
return pulumi.get(self, "rolling_upgrade_policy")
@rolling_upgrade_policy.setter
def rolling_upgrade_policy(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetRollingUpgradePolicyArgs']]):
pulumi.set(self, "rolling_upgrade_policy", value)
@property
@pulumi.getter(name="scaleInPolicy")
def scale_in_policy(self) -> Optional[pulumi.Input[str]]:
"""
The scale-in policy rule that decides which virtual machines are chosen for removal when a Virtual Machine Scale Set is scaled in. Possible values for the scale-in policy rules are `Default`, `NewestVM` and `OldestVM`, defaults to `Default`. For more information about scale in policy, please [refer to this doc](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy).
"""
return pulumi.get(self, "scale_in_policy")
@scale_in_policy.setter
def scale_in_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "scale_in_policy", value)
@property
@pulumi.getter
def secrets(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetSecretArgs']]]]:
"""
One or more `secret` blocks as defined below.
"""
return pulumi.get(self, "secrets")
@secrets.setter
def secrets(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetSecretArgs']]]]):
pulumi.set(self, "secrets", value)
@property
@pulumi.getter(name="secureBootEnabled")
def secure_boot_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
        Specifies if Secure Boot and Trusted Launch are enabled for the Virtual Machine. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "secure_boot_enabled")
@secure_boot_enabled.setter
def secure_boot_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "secure_boot_enabled", value)
@property
@pulumi.getter(name="singlePlacementGroup")
def single_placement_group(self) -> Optional[pulumi.Input[bool]]:
"""
        Should this Virtual Machine Scale Set be limited to a Single Placement Group, which means the number of instances will be capped at 100 Virtual Machines? Defaults to `true`.
"""
return pulumi.get(self, "single_placement_group")
@single_placement_group.setter
def single_placement_group(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "single_placement_group", value)
@property
@pulumi.getter
def sku(self) -> Optional[pulumi.Input[str]]:
"""
The Virtual Machine SKU for the Scale Set, such as `Standard_F2`.
"""
return pulumi.get(self, "sku")
@sku.setter
def sku(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "sku", value)
@property
@pulumi.getter(name="sourceImageId")
def source_image_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of an Image which each Virtual Machine in this Scale Set should be based on.
"""
return pulumi.get(self, "source_image_id")
@source_image_id.setter
def source_image_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_image_id", value)
@property
@pulumi.getter(name="sourceImageReference")
def source_image_reference(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetSourceImageReferenceArgs']]:
"""
A `source_image_reference` block as defined below.
"""
return pulumi.get(self, "source_image_reference")
@source_image_reference.setter
def source_image_reference(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetSourceImageReferenceArgs']]):
pulumi.set(self, "source_image_reference", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A mapping of tags which should be assigned to this Virtual Machine Scale Set.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter(name="terminateNotification")
def terminate_notification(self) -> Optional[pulumi.Input['WindowsVirtualMachineScaleSetTerminateNotificationArgs']]:
"""
A `terminate_notification` block as defined below.
"""
return pulumi.get(self, "terminate_notification")
@terminate_notification.setter
def terminate_notification(self, value: Optional[pulumi.Input['WindowsVirtualMachineScaleSetTerminateNotificationArgs']]):
pulumi.set(self, "terminate_notification", value)
@property
@pulumi.getter
def timezone(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the time zone of the virtual machine, [the possible values are defined here](https://jackstromberg.com/2017/01/list-of-time-zones-consumed-by-azure/).
"""
return pulumi.get(self, "timezone")
@timezone.setter
def timezone(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "timezone", value)
@property
@pulumi.getter(name="uniqueId")
def unique_id(self) -> Optional[pulumi.Input[str]]:
"""
The Unique ID for this Windows Virtual Machine Scale Set.
"""
return pulumi.get(self, "unique_id")
@unique_id.setter
def unique_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "unique_id", value)
@property
@pulumi.getter(name="upgradeMode")
def upgrade_mode(self) -> Optional[pulumi.Input[str]]:
"""
        Specifies how Upgrades (e.g. changing the Image/SKU) should be performed on Virtual Machine Instances. Possible values are `Automatic`, `Manual` and `Rolling`. Defaults to `Manual`.
"""
return pulumi.get(self, "upgrade_mode")
@upgrade_mode.setter
def upgrade_mode(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "upgrade_mode", value)
@property
@pulumi.getter(name="userData")
def user_data(self) -> Optional[pulumi.Input[str]]:
"""
The Base64-Encoded User Data which should be used for this Virtual Machine Scale Set.
"""
return pulumi.get(self, "user_data")
@user_data.setter
def user_data(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "user_data", value)
@property
@pulumi.getter(name="vtpmEnabled")
def vtpm_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
        Specifies if vTPM (Virtual Trusted Platform Module) and Trusted Launch are enabled for the Virtual Machine. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "vtpm_enabled")
@vtpm_enabled.setter
def vtpm_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "vtpm_enabled", value)
@property
@pulumi.getter(name="winrmListeners")
def winrm_listeners(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetWinrmListenerArgs']]]]:
"""
One or more `winrm_listener` blocks as defined below.
"""
return pulumi.get(self, "winrm_listeners")
@winrm_listeners.setter
def winrm_listeners(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['WindowsVirtualMachineScaleSetWinrmListenerArgs']]]]):
pulumi.set(self, "winrm_listeners", value)
@property
@pulumi.getter(name="zoneBalance")
def zone_balance(self) -> Optional[pulumi.Input[bool]]:
"""
Should the Virtual Machines in this Scale Set be strictly evenly distributed across Availability Zones? Defaults to `false`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "zone_balance")
@zone_balance.setter
def zone_balance(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "zone_balance", value)
@property
@pulumi.getter
def zones(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
        A list of Availability Zones in which the Virtual Machines in this Scale Set should be created. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "zones")
@zones.setter
def zones(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "zones", value)
class WindowsVirtualMachineScaleSet(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
additional_capabilities: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAdditionalCapabilitiesArgs']]] = None,
additional_unattend_contents: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAdditionalUnattendContentArgs']]]]] = None,
admin_password: Optional[pulumi.Input[str]] = None,
admin_username: Optional[pulumi.Input[str]] = None,
automatic_instance_repair: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAutomaticInstanceRepairArgs']]] = None,
automatic_os_upgrade_policy: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAutomaticOsUpgradePolicyArgs']]] = None,
boot_diagnostics: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetBootDiagnosticsArgs']]] = None,
computer_name_prefix: Optional[pulumi.Input[str]] = None,
custom_data: Optional[pulumi.Input[str]] = None,
data_disks: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetDataDiskArgs']]]]] = None,
do_not_run_extensions_on_overprovisioned_machines: Optional[pulumi.Input[bool]] = None,
enable_automatic_updates: Optional[pulumi.Input[bool]] = None,
encryption_at_host_enabled: Optional[pulumi.Input[bool]] = None,
eviction_policy: Optional[pulumi.Input[str]] = None,
extensions: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetExtensionArgs']]]]] = None,
extensions_time_budget: Optional[pulumi.Input[str]] = None,
health_probe_id: Optional[pulumi.Input[str]] = None,
identity: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetIdentityArgs']]] = None,
instances: Optional[pulumi.Input[int]] = None,
license_type: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
max_bid_price: Optional[pulumi.Input[float]] = None,
name: Optional[pulumi.Input[str]] = None,
network_interfaces: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetNetworkInterfaceArgs']]]]] = None,
os_disk: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetOsDiskArgs']]] = None,
overprovision: Optional[pulumi.Input[bool]] = None,
plan: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetPlanArgs']]] = None,
platform_fault_domain_count: Optional[pulumi.Input[int]] = None,
priority: Optional[pulumi.Input[str]] = None,
provision_vm_agent: Optional[pulumi.Input[bool]] = None,
proximity_placement_group_id: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
rolling_upgrade_policy: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetRollingUpgradePolicyArgs']]] = None,
scale_in_policy: Optional[pulumi.Input[str]] = None,
secrets: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetSecretArgs']]]]] = None,
secure_boot_enabled: Optional[pulumi.Input[bool]] = None,
single_placement_group: Optional[pulumi.Input[bool]] = None,
sku: Optional[pulumi.Input[str]] = None,
source_image_id: Optional[pulumi.Input[str]] = None,
source_image_reference: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetSourceImageReferenceArgs']]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
terminate_notification: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetTerminateNotificationArgs']]] = None,
timezone: Optional[pulumi.Input[str]] = None,
upgrade_mode: Optional[pulumi.Input[str]] = None,
user_data: Optional[pulumi.Input[str]] = None,
vtpm_enabled: Optional[pulumi.Input[bool]] = None,
winrm_listeners: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetWinrmListenerArgs']]]]] = None,
zone_balance: Optional[pulumi.Input[bool]] = None,
zones: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
__props__=None):
"""
Manages a Windows Virtual Machine Scale Set.
## Disclaimers
        > **NOTE:** All arguments, including the administrator login and password, will be stored in the raw state as plain-text.
> **NOTE:** This provider will automatically update & reimage the nodes in the Scale Set (if Required) during an Update - this behaviour can be configured using the `features` setting within the Provider block.
        > **NOTE:** This resource does not support Unmanaged Disks. If you need to use Unmanaged Disks you can continue to use the `compute.ScaleSet` resource instead.
## Example Usage
This example provisions a basic Windows Virtual Machine Scale Set on an internal network.
```python
import pulumi
import pulumi_azure as azure
example_resource_group = azure.core.ResourceGroup("exampleResourceGroup", location="West Europe")
example_virtual_network = azure.network.VirtualNetwork("exampleVirtualNetwork",
resource_group_name=example_resource_group.name,
location=example_resource_group.location,
address_spaces=["10.0.0.0/16"])
internal = azure.network.Subnet("internal",
resource_group_name=example_resource_group.name,
virtual_network_name=example_virtual_network.name,
address_prefixes=["10.0.2.0/24"])
example_windows_virtual_machine_scale_set = azure.compute.WindowsVirtualMachineScaleSet("exampleWindowsVirtualMachineScaleSet",
resource_group_name=example_resource_group.name,
location=example_resource_group.location,
sku="Standard_F2",
instances=1,
admin_password="P@55w0rd1234!",
admin_username="adminuser",
source_image_reference=azure.compute.WindowsVirtualMachineScaleSetSourceImageReferenceArgs(
publisher="MicrosoftWindowsServer",
offer="WindowsServer",
sku="2016-Datacenter-Server-Core",
version="latest",
),
os_disk=azure.compute.WindowsVirtualMachineScaleSetOsDiskArgs(
storage_account_type="Standard_LRS",
caching="ReadWrite",
),
network_interfaces=[azure.compute.WindowsVirtualMachineScaleSetNetworkInterfaceArgs(
name="example",
primary=True,
ip_configurations=[azure.compute.WindowsVirtualMachineScaleSetNetworkInterfaceIpConfigurationArgs(
name="internal",
primary=True,
subnet_id=internal.id,
)],
)])
```
## Import
Windows Virtual Machine Scale Sets can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:compute/windowsVirtualMachineScaleSet:WindowsVirtualMachineScaleSet example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Compute/virtualMachineScaleSets/scaleset1
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
        :param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAdditionalCapabilitiesArgs']] additional_capabilities: An `additional_capabilities` block as defined below.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAdditionalUnattendContentArgs']]]] additional_unattend_contents: One or more `additional_unattend_content` blocks as defined below.
:param pulumi.Input[str] admin_password: The Password which should be used for the local-administrator on this Virtual Machine. Changing this forces a new resource to be created.
:param pulumi.Input[str] admin_username: The username of the local administrator on each Virtual Machine Scale Set instance. Changing this forces a new resource to be created.
        :param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAutomaticInstanceRepairArgs']] automatic_instance_repair: An `automatic_instance_repair` block as defined below. To enable automatic instance repair, this Virtual Machine Scale Set must have a valid `health_probe_id` or an [Application Health Extension](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension).
        :param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAutomaticOsUpgradePolicyArgs']] automatic_os_upgrade_policy: An `automatic_os_upgrade_policy` block as defined below. This can only be specified when `upgrade_mode` is set to `Automatic`.
:param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetBootDiagnosticsArgs']] boot_diagnostics: A `boot_diagnostics` block as defined below.
:param pulumi.Input[str] computer_name_prefix: The prefix which should be used for the name of the Virtual Machines in this Scale Set. If unspecified this defaults to the value for the `name` field. If the value of the `name` field is not a valid `computer_name_prefix`, then you must specify `computer_name_prefix`.
:param pulumi.Input[str] custom_data: The Base64-Encoded Custom Data which should be used for this Virtual Machine Scale Set.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetDataDiskArgs']]]] data_disks: One or more `data_disk` blocks as defined below.
:param pulumi.Input[bool] do_not_run_extensions_on_overprovisioned_machines: Should Virtual Machine Extensions be run on Overprovisioned Virtual Machines in the Scale Set? Defaults to `false`.
:param pulumi.Input[bool] enable_automatic_updates: Are automatic updates enabled for this Virtual Machine? Defaults to `true`.
:param pulumi.Input[bool] encryption_at_host_enabled: Should all of the disks (including the temp disk) attached to this Virtual Machine be encrypted by enabling Encryption at Host?
        :param pulumi.Input[str] eviction_policy: The Policy which should be used when Virtual Machines are Evicted from the Scale Set. Changing this forces a new resource to be created.
        :param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetExtensionArgs']]]] extensions: One or more `extension` blocks as defined below.
:param pulumi.Input[str] extensions_time_budget: Specifies the duration allocated for all extensions to start. The time duration should be between `15` minutes and `120` minutes (inclusive) and should be specified in ISO 8601 format. Defaults to `90` minutes (`PT1H30M`).
:param pulumi.Input[str] health_probe_id: The ID of a Load Balancer Probe which should be used to determine the health of an instance. This is Required and can only be specified when `upgrade_mode` is set to `Automatic` or `Rolling`.
:param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetIdentityArgs']] identity: An `identity` block as defined below.
:param pulumi.Input[int] instances: The number of Virtual Machines in the Scale Set.
:param pulumi.Input[str] license_type: Specifies the type of on-premise license (also known as [Azure Hybrid Use Benefit](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-hybrid-use-benefit-licensing)) which should be used for this Virtual Machine Scale Set. Possible values are `None`, `Windows_Client` and `Windows_Server`.
:param pulumi.Input[str] location: The Azure location where the Windows Virtual Machine Scale Set should exist. Changing this forces a new resource to be created.
:param pulumi.Input[float] max_bid_price: The maximum price you're willing to pay for each Virtual Machine in this Scale Set, in US Dollars; which must be greater than the current spot price. If this bid price falls below the current spot price the Virtual Machines in the Scale Set will be evicted using the `eviction_policy`. Defaults to `-1`, which means that each Virtual Machine in the Scale Set should not be evicted for price reasons.
:param pulumi.Input[str] name: The name of the Windows Virtual Machine Scale Set. Changing this forces a new resource to be created.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetNetworkInterfaceArgs']]]] network_interfaces: One or more `network_interface` blocks as defined below.
:param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetOsDiskArgs']] os_disk: An `os_disk` block as defined below.
        :param pulumi.Input[bool] overprovision: Should Azure over-provision Virtual Machines in this Scale Set? This means that multiple Virtual Machines will be provisioned and Azure will keep the instances which become available first - which improves provisioning success rates and deployment time. You're not billed for these over-provisioned VMs and they don't count towards the Subscription Quota. Defaults to `true`.
:param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetPlanArgs']] plan: A `plan` block as documented below.
        :param pulumi.Input[int] platform_fault_domain_count: Specifies the number of fault domains that are used by this Windows Virtual Machine Scale Set. Changing this forces a new resource to be created.
:param pulumi.Input[str] priority: The Priority of this Virtual Machine Scale Set. Possible values are `Regular` and `Spot`. Defaults to `Regular`. Changing this value forces a new resource.
:param pulumi.Input[bool] provision_vm_agent: Should the Azure VM Agent be provisioned on each Virtual Machine in the Scale Set? Defaults to `true`. Changing this value forces a new resource to be created.
        :param pulumi.Input[str] proximity_placement_group_id: The ID of the Proximity Placement Group to which the Virtual Machine Scale Set should be assigned. Changing this forces a new resource to be created.
        :param pulumi.Input[str] resource_group_name: The name of the Resource Group in which the Windows Virtual Machine Scale Set should exist. Changing this forces a new resource to be created.
:param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetRollingUpgradePolicyArgs']] rolling_upgrade_policy: A `rolling_upgrade_policy` block as defined below. This is Required and can only be specified when `upgrade_mode` is set to `Automatic` or `Rolling`.
:param pulumi.Input[str] scale_in_policy: The scale-in policy rule that decides which virtual machines are chosen for removal when a Virtual Machine Scale Set is scaled in. Possible values for the scale-in policy rules are `Default`, `NewestVM` and `OldestVM`, defaults to `Default`. For more information about scale in policy, please [refer to this doc](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy).
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetSecretArgs']]]] secrets: One or more `secret` blocks as defined below.
        :param pulumi.Input[bool] secure_boot_enabled: Specifies if Secure Boot and Trusted Launch are enabled for the Virtual Machine. Changing this forces a new resource to be created.
        :param pulumi.Input[bool] single_placement_group: Should this Virtual Machine Scale Set be limited to a Single Placement Group, which means the number of instances will be capped at 100 Virtual Machines? Defaults to `true`.
:param pulumi.Input[str] sku: The Virtual Machine SKU for the Scale Set, such as `Standard_F2`.
:param pulumi.Input[str] source_image_id: The ID of an Image which each Virtual Machine in this Scale Set should be based on.
:param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetSourceImageReferenceArgs']] source_image_reference: A `source_image_reference` block as defined below.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags which should be assigned to this Virtual Machine Scale Set.
:param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetTerminateNotificationArgs']] terminate_notification: A `terminate_notification` block as defined below.
:param pulumi.Input[str] timezone: Specifies the time zone of the virtual machine, [the possible values are defined here](https://jackstromberg.com/2017/01/list-of-time-zones-consumed-by-azure/).
        :param pulumi.Input[str] upgrade_mode: Specifies how Upgrades (e.g. changing the Image/SKU) should be performed on Virtual Machine Instances. Possible values are `Automatic`, `Manual` and `Rolling`. Defaults to `Manual`.
:param pulumi.Input[str] user_data: The Base64-Encoded User Data which should be used for this Virtual Machine Scale Set.
        :param pulumi.Input[bool] vtpm_enabled: Specifies if vTPM (Virtual Trusted Platform Module) and Trusted Launch are enabled for the Virtual Machine. Changing this forces a new resource to be created.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetWinrmListenerArgs']]]] winrm_listeners: One or more `winrm_listener` blocks as defined below.
:param pulumi.Input[bool] zone_balance: Should the Virtual Machines in this Scale Set be strictly evenly distributed across Availability Zones? Defaults to `false`. Changing this forces a new resource to be created.
        :param pulumi.Input[Sequence[pulumi.Input[str]]] zones: A list of Availability Zones in which the Virtual Machines in this Scale Set should be created. Changing this forces a new resource to be created.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: WindowsVirtualMachineScaleSetArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Manages a Windows Virtual Machine Scale Set.
## Disclaimers
        > **NOTE:** All arguments, including the administrator login and password, will be stored in the raw state as plain-text.
> **NOTE:** This provider will automatically update & reimage the nodes in the Scale Set (if Required) during an Update - this behaviour can be configured using the `features` setting within the Provider block.
        > **NOTE:** This resource does not support Unmanaged Disks. If you need to use Unmanaged Disks you can continue to use the `compute.ScaleSet` resource instead.
## Example Usage
This example provisions a basic Windows Virtual Machine Scale Set on an internal network.
```python
import pulumi
import pulumi_azure as azure
example_resource_group = azure.core.ResourceGroup("exampleResourceGroup", location="West Europe")
example_virtual_network = azure.network.VirtualNetwork("exampleVirtualNetwork",
resource_group_name=example_resource_group.name,
location=example_resource_group.location,
address_spaces=["10.0.0.0/16"])
internal = azure.network.Subnet("internal",
resource_group_name=example_resource_group.name,
virtual_network_name=example_virtual_network.name,
address_prefixes=["10.0.2.0/24"])
example_windows_virtual_machine_scale_set = azure.compute.WindowsVirtualMachineScaleSet("exampleWindowsVirtualMachineScaleSet",
resource_group_name=example_resource_group.name,
location=example_resource_group.location,
sku="Standard_F2",
instances=1,
admin_password="P@55w0rd1234!",
admin_username="adminuser",
source_image_reference=azure.compute.WindowsVirtualMachineScaleSetSourceImageReferenceArgs(
publisher="MicrosoftWindowsServer",
offer="WindowsServer",
sku="2016-Datacenter-Server-Core",
version="latest",
),
os_disk=azure.compute.WindowsVirtualMachineScaleSetOsDiskArgs(
storage_account_type="Standard_LRS",
caching="ReadWrite",
),
network_interfaces=[azure.compute.WindowsVirtualMachineScaleSetNetworkInterfaceArgs(
name="example",
primary=True,
ip_configurations=[azure.compute.WindowsVirtualMachineScaleSetNetworkInterfaceIpConfigurationArgs(
name="internal",
primary=True,
subnet_id=internal.id,
)],
)])
```
## Import
Windows Virtual Machine Scale Sets can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:compute/windowsVirtualMachineScaleSet:WindowsVirtualMachineScaleSet example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Compute/virtualMachineScaleSets/scaleset1
```
:param str resource_name: The name of the resource.
:param WindowsVirtualMachineScaleSetArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(WindowsVirtualMachineScaleSetArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
additional_capabilities: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAdditionalCapabilitiesArgs']]] = None,
additional_unattend_contents: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAdditionalUnattendContentArgs']]]]] = None,
admin_password: Optional[pulumi.Input[str]] = None,
admin_username: Optional[pulumi.Input[str]] = None,
automatic_instance_repair: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAutomaticInstanceRepairArgs']]] = None,
automatic_os_upgrade_policy: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAutomaticOsUpgradePolicyArgs']]] = None,
boot_diagnostics: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetBootDiagnosticsArgs']]] = None,
computer_name_prefix: Optional[pulumi.Input[str]] = None,
custom_data: Optional[pulumi.Input[str]] = None,
data_disks: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetDataDiskArgs']]]]] = None,
do_not_run_extensions_on_overprovisioned_machines: Optional[pulumi.Input[bool]] = None,
enable_automatic_updates: Optional[pulumi.Input[bool]] = None,
encryption_at_host_enabled: Optional[pulumi.Input[bool]] = None,
eviction_policy: Optional[pulumi.Input[str]] = None,
extensions: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetExtensionArgs']]]]] = None,
extensions_time_budget: Optional[pulumi.Input[str]] = None,
health_probe_id: Optional[pulumi.Input[str]] = None,
identity: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetIdentityArgs']]] = None,
instances: Optional[pulumi.Input[int]] = None,
license_type: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
max_bid_price: Optional[pulumi.Input[float]] = None,
name: Optional[pulumi.Input[str]] = None,
network_interfaces: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetNetworkInterfaceArgs']]]]] = None,
os_disk: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetOsDiskArgs']]] = None,
overprovision: Optional[pulumi.Input[bool]] = None,
plan: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetPlanArgs']]] = None,
platform_fault_domain_count: Optional[pulumi.Input[int]] = None,
priority: Optional[pulumi.Input[str]] = None,
provision_vm_agent: Optional[pulumi.Input[bool]] = None,
proximity_placement_group_id: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
rolling_upgrade_policy: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetRollingUpgradePolicyArgs']]] = None,
scale_in_policy: Optional[pulumi.Input[str]] = None,
secrets: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetSecretArgs']]]]] = None,
secure_boot_enabled: Optional[pulumi.Input[bool]] = None,
single_placement_group: Optional[pulumi.Input[bool]] = None,
sku: Optional[pulumi.Input[str]] = None,
source_image_id: Optional[pulumi.Input[str]] = None,
source_image_reference: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetSourceImageReferenceArgs']]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
terminate_notification: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetTerminateNotificationArgs']]] = None,
timezone: Optional[pulumi.Input[str]] = None,
upgrade_mode: Optional[pulumi.Input[str]] = None,
user_data: Optional[pulumi.Input[str]] = None,
vtpm_enabled: Optional[pulumi.Input[bool]] = None,
winrm_listeners: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetWinrmListenerArgs']]]]] = None,
zone_balance: Optional[pulumi.Input[bool]] = None,
zones: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = WindowsVirtualMachineScaleSetArgs.__new__(WindowsVirtualMachineScaleSetArgs)
__props__.__dict__["additional_capabilities"] = additional_capabilities
__props__.__dict__["additional_unattend_contents"] = additional_unattend_contents
if admin_password is None and not opts.urn:
raise TypeError("Missing required property 'admin_password'")
__props__.__dict__["admin_password"] = admin_password
if admin_username is None and not opts.urn:
raise TypeError("Missing required property 'admin_username'")
__props__.__dict__["admin_username"] = admin_username
__props__.__dict__["automatic_instance_repair"] = automatic_instance_repair
__props__.__dict__["automatic_os_upgrade_policy"] = automatic_os_upgrade_policy
__props__.__dict__["boot_diagnostics"] = boot_diagnostics
__props__.__dict__["computer_name_prefix"] = computer_name_prefix
__props__.__dict__["custom_data"] = custom_data
__props__.__dict__["data_disks"] = data_disks
__props__.__dict__["do_not_run_extensions_on_overprovisioned_machines"] = do_not_run_extensions_on_overprovisioned_machines
__props__.__dict__["enable_automatic_updates"] = enable_automatic_updates
__props__.__dict__["encryption_at_host_enabled"] = encryption_at_host_enabled
__props__.__dict__["eviction_policy"] = eviction_policy
__props__.__dict__["extensions"] = extensions
__props__.__dict__["extensions_time_budget"] = extensions_time_budget
__props__.__dict__["health_probe_id"] = health_probe_id
__props__.__dict__["identity"] = identity
if instances is None and not opts.urn:
raise TypeError("Missing required property 'instances'")
__props__.__dict__["instances"] = instances
__props__.__dict__["license_type"] = license_type
__props__.__dict__["location"] = location
__props__.__dict__["max_bid_price"] = max_bid_price
__props__.__dict__["name"] = name
if network_interfaces is None and not opts.urn:
raise TypeError("Missing required property 'network_interfaces'")
__props__.__dict__["network_interfaces"] = network_interfaces
if os_disk is None and not opts.urn:
raise TypeError("Missing required property 'os_disk'")
__props__.__dict__["os_disk"] = os_disk
__props__.__dict__["overprovision"] = overprovision
__props__.__dict__["plan"] = plan
__props__.__dict__["platform_fault_domain_count"] = platform_fault_domain_count
__props__.__dict__["priority"] = priority
__props__.__dict__["provision_vm_agent"] = provision_vm_agent
__props__.__dict__["proximity_placement_group_id"] = proximity_placement_group_id
if resource_group_name is None and not opts.urn:
raise TypeError("Missing required property 'resource_group_name'")
__props__.__dict__["resource_group_name"] = resource_group_name
__props__.__dict__["rolling_upgrade_policy"] = rolling_upgrade_policy
__props__.__dict__["scale_in_policy"] = scale_in_policy
__props__.__dict__["secrets"] = secrets
__props__.__dict__["secure_boot_enabled"] = secure_boot_enabled
__props__.__dict__["single_placement_group"] = single_placement_group
if sku is None and not opts.urn:
raise TypeError("Missing required property 'sku'")
__props__.__dict__["sku"] = sku
__props__.__dict__["source_image_id"] = source_image_id
__props__.__dict__["source_image_reference"] = source_image_reference
__props__.__dict__["tags"] = tags
__props__.__dict__["terminate_notification"] = terminate_notification
__props__.__dict__["timezone"] = timezone
__props__.__dict__["upgrade_mode"] = upgrade_mode
__props__.__dict__["user_data"] = user_data
__props__.__dict__["vtpm_enabled"] = vtpm_enabled
__props__.__dict__["winrm_listeners"] = winrm_listeners
__props__.__dict__["zone_balance"] = zone_balance
__props__.__dict__["zones"] = zones
__props__.__dict__["unique_id"] = None
super(WindowsVirtualMachineScaleSet, __self__).__init__(
'azure:compute/windowsVirtualMachineScaleSet:WindowsVirtualMachineScaleSet',
resource_name,
__props__,
opts)
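    # Example (sketch): creating a minimal scale set with this class. This is
    # an illustrative, hedged sketch, not generated documentation - it assumes
    # the `pulumi_azure` package and a `subnet` resource defined elsewhere in
    # the program; all names and values below are hypothetical placeholders.
    #
    #     import pulumi_azure as azure
    #
    #     example = azure.compute.WindowsVirtualMachineScaleSet(
    #         "example",
    #         resource_group_name="example-rg",
    #         location="West Europe",
    #         sku="Standard_F2",
    #         instances=1,
    #         admin_username="adminuser",
    #         admin_password="<secret-password>",
    #         os_disk=azure.compute.WindowsVirtualMachineScaleSetOsDiskArgs(
    #             storage_account_type="Standard_LRS",
    #             caching="ReadWrite"),
    #         network_interfaces=[azure.compute.WindowsVirtualMachineScaleSetNetworkInterfaceArgs(
    #             name="internal",
    #             primary=True,
    #             ip_configurations=[azure.compute.WindowsVirtualMachineScaleSetNetworkInterfaceIpConfigurationArgs(
    #                 name="internal",
    #                 primary=True,
    #                 subnet_id=subnet.id)])],
    #         source_image_reference=azure.compute.WindowsVirtualMachineScaleSetSourceImageReferenceArgs(
    #             publisher="MicrosoftWindowsServer",
    #             offer="WindowsServer",
    #             sku="2019-Datacenter",
    #             version="latest"))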
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
additional_capabilities: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAdditionalCapabilitiesArgs']]] = None,
additional_unattend_contents: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAdditionalUnattendContentArgs']]]]] = None,
admin_password: Optional[pulumi.Input[str]] = None,
admin_username: Optional[pulumi.Input[str]] = None,
automatic_instance_repair: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAutomaticInstanceRepairArgs']]] = None,
automatic_os_upgrade_policy: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAutomaticOsUpgradePolicyArgs']]] = None,
boot_diagnostics: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetBootDiagnosticsArgs']]] = None,
computer_name_prefix: Optional[pulumi.Input[str]] = None,
custom_data: Optional[pulumi.Input[str]] = None,
data_disks: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetDataDiskArgs']]]]] = None,
do_not_run_extensions_on_overprovisioned_machines: Optional[pulumi.Input[bool]] = None,
enable_automatic_updates: Optional[pulumi.Input[bool]] = None,
encryption_at_host_enabled: Optional[pulumi.Input[bool]] = None,
eviction_policy: Optional[pulumi.Input[str]] = None,
extensions: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetExtensionArgs']]]]] = None,
extensions_time_budget: Optional[pulumi.Input[str]] = None,
health_probe_id: Optional[pulumi.Input[str]] = None,
identity: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetIdentityArgs']]] = None,
instances: Optional[pulumi.Input[int]] = None,
license_type: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
max_bid_price: Optional[pulumi.Input[float]] = None,
name: Optional[pulumi.Input[str]] = None,
network_interfaces: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetNetworkInterfaceArgs']]]]] = None,
os_disk: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetOsDiskArgs']]] = None,
overprovision: Optional[pulumi.Input[bool]] = None,
plan: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetPlanArgs']]] = None,
platform_fault_domain_count: Optional[pulumi.Input[int]] = None,
priority: Optional[pulumi.Input[str]] = None,
provision_vm_agent: Optional[pulumi.Input[bool]] = None,
proximity_placement_group_id: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
rolling_upgrade_policy: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetRollingUpgradePolicyArgs']]] = None,
scale_in_policy: Optional[pulumi.Input[str]] = None,
secrets: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetSecretArgs']]]]] = None,
secure_boot_enabled: Optional[pulumi.Input[bool]] = None,
single_placement_group: Optional[pulumi.Input[bool]] = None,
sku: Optional[pulumi.Input[str]] = None,
source_image_id: Optional[pulumi.Input[str]] = None,
source_image_reference: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetSourceImageReferenceArgs']]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
terminate_notification: Optional[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetTerminateNotificationArgs']]] = None,
timezone: Optional[pulumi.Input[str]] = None,
unique_id: Optional[pulumi.Input[str]] = None,
upgrade_mode: Optional[pulumi.Input[str]] = None,
user_data: Optional[pulumi.Input[str]] = None,
vtpm_enabled: Optional[pulumi.Input[bool]] = None,
winrm_listeners: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetWinrmListenerArgs']]]]] = None,
zone_balance: Optional[pulumi.Input[bool]] = None,
zones: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None) -> 'WindowsVirtualMachineScaleSet':
"""
Get an existing WindowsVirtualMachineScaleSet resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAdditionalCapabilitiesArgs']] additional_capabilities: An `additional_capabilities` block as defined below.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAdditionalUnattendContentArgs']]]] additional_unattend_contents: One or more `additional_unattend_content` blocks as defined below.
:param pulumi.Input[str] admin_password: The Password which should be used for the local administrator on each Virtual Machine Scale Set instance. Changing this forces a new resource to be created.
:param pulumi.Input[str] admin_username: The username of the local administrator on each Virtual Machine Scale Set instance. Changing this forces a new resource to be created.
:param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAutomaticInstanceRepairArgs']] automatic_instance_repair: An `automatic_instance_repair` block as defined below. To enable automatic instance repair, this Virtual Machine Scale Set must have a valid `health_probe_id` or an [Application Health Extension](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension).
:param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetAutomaticOsUpgradePolicyArgs']] automatic_os_upgrade_policy: An `automatic_os_upgrade_policy` block as defined below. This can only be specified when `upgrade_mode` is set to `Automatic`.
:param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetBootDiagnosticsArgs']] boot_diagnostics: A `boot_diagnostics` block as defined below.
:param pulumi.Input[str] computer_name_prefix: The prefix which should be used for the name of the Virtual Machines in this Scale Set. If unspecified this defaults to the value for the `name` field. If the value of the `name` field is not a valid `computer_name_prefix`, then you must specify `computer_name_prefix`.
:param pulumi.Input[str] custom_data: The Base64-Encoded Custom Data which should be used for this Virtual Machine Scale Set.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetDataDiskArgs']]]] data_disks: One or more `data_disk` blocks as defined below.
:param pulumi.Input[bool] do_not_run_extensions_on_overprovisioned_machines: Should Virtual Machine Extensions be run on Overprovisioned Virtual Machines in the Scale Set? Defaults to `false`.
:param pulumi.Input[bool] enable_automatic_updates: Are automatic updates enabled for this Virtual Machine? Defaults to `true`.
:param pulumi.Input[bool] encryption_at_host_enabled: Should all of the disks (including the temp disk) attached to this Virtual Machine be encrypted by enabling Encryption at Host?
:param pulumi.Input[str] eviction_policy: The Policy which should be used when Virtual Machines are Evicted from the Scale Set. Changing this forces a new resource to be created.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetExtensionArgs']]]] extensions: One or more `extension` blocks as defined below.
:param pulumi.Input[str] extensions_time_budget: Specifies the duration allocated for all extensions to start. The time duration should be between `15` minutes and `120` minutes (inclusive) and should be specified in ISO 8601 format. Defaults to `90` minutes (`PT1H30M`).
:param pulumi.Input[str] health_probe_id: The ID of a Load Balancer Probe which should be used to determine the health of an instance. This is Required and can only be specified when `upgrade_mode` is set to `Automatic` or `Rolling`.
:param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetIdentityArgs']] identity: An `identity` block as defined below.
:param pulumi.Input[int] instances: The number of Virtual Machines in the Scale Set.
:param pulumi.Input[str] license_type: Specifies the type of on-premises license (also known as [Azure Hybrid Use Benefit](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-hybrid-use-benefit-licensing)) which should be used for this Virtual Machine Scale Set. Possible values are `None`, `Windows_Client` and `Windows_Server`.
:param pulumi.Input[str] location: The Azure location where the Windows Virtual Machine Scale Set should exist. Changing this forces a new resource to be created.
:param pulumi.Input[float] max_bid_price: The maximum price you're willing to pay for each Virtual Machine in this Scale Set, in US Dollars, which must be greater than the current spot price. If this bid price falls below the current spot price the Virtual Machines in the Scale Set will be evicted using the `eviction_policy`. Defaults to `-1`, which means that each Virtual Machine in the Scale Set should not be evicted for price reasons.
:param pulumi.Input[str] name: The name of the Windows Virtual Machine Scale Set. Changing this forces a new resource to be created.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetNetworkInterfaceArgs']]]] network_interfaces: One or more `network_interface` blocks as defined below.
:param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetOsDiskArgs']] os_disk: An `os_disk` block as defined below.
:param pulumi.Input[bool] overprovision: Should Azure over-provision Virtual Machines in this Scale Set? This means that multiple Virtual Machines will be provisioned and Azure will keep the instances which become available first - which improves provisioning success rates and reduces deployment time. You're not billed for these over-provisioned VMs and they don't count towards the Subscription Quota. Defaults to `true`.
:param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetPlanArgs']] plan: A `plan` block as documented below.
:param pulumi.Input[int] platform_fault_domain_count: Specifies the number of fault domains that are used by this Windows Virtual Machine Scale Set. Changing this forces a new resource to be created.
:param pulumi.Input[str] priority: The Priority of this Virtual Machine Scale Set. Possible values are `Regular` and `Spot`. Defaults to `Regular`. Changing this value forces a new resource.
:param pulumi.Input[bool] provision_vm_agent: Should the Azure VM Agent be provisioned on each Virtual Machine in the Scale Set? Defaults to `true`. Changing this value forces a new resource to be created.
:param pulumi.Input[str] proximity_placement_group_id: The ID of the Proximity Placement Group to which the Virtual Machine Scale Set should be assigned. Changing this forces a new resource to be created.
:param pulumi.Input[str] resource_group_name: The name of the Resource Group in which the Windows Virtual Machine Scale Set should exist. Changing this forces a new resource to be created.
:param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetRollingUpgradePolicyArgs']] rolling_upgrade_policy: A `rolling_upgrade_policy` block as defined below. This is Required and can only be specified when `upgrade_mode` is set to `Automatic` or `Rolling`.
:param pulumi.Input[str] scale_in_policy: The scale-in policy rule that decides which virtual machines are chosen for removal when a Virtual Machine Scale Set is scaled in. Possible values for the scale-in policy rules are `Default`, `NewestVM` and `OldestVM`, defaults to `Default`. For more information about scale in policy, please [refer to this doc](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy).
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetSecretArgs']]]] secrets: One or more `secret` blocks as defined below.
:param pulumi.Input[bool] secure_boot_enabled: Specifies if Secure Boot and Trusted Launch are enabled for the Virtual Machine. Changing this forces a new resource to be created.
:param pulumi.Input[bool] single_placement_group: Should this Virtual Machine Scale Set be limited to a Single Placement Group, which caps the number of instances at 100 Virtual Machines? Defaults to `true`.
:param pulumi.Input[str] sku: The Virtual Machine SKU for the Scale Set, such as `Standard_F2`.
:param pulumi.Input[str] source_image_id: The ID of an Image which each Virtual Machine in this Scale Set should be based on.
:param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetSourceImageReferenceArgs']] source_image_reference: A `source_image_reference` block as defined below.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags which should be assigned to this Virtual Machine Scale Set.
:param pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetTerminateNotificationArgs']] terminate_notification: A `terminate_notification` block as defined below.
:param pulumi.Input[str] timezone: Specifies the time zone of the virtual machine, [the possible values are defined here](https://jackstromberg.com/2017/01/list-of-time-zones-consumed-by-azure/).
:param pulumi.Input[str] unique_id: The Unique ID for this Windows Virtual Machine Scale Set.
:param pulumi.Input[str] upgrade_mode: Specifies how Upgrades (e.g. changing the Image/SKU) should be performed to Virtual Machine Instances. Possible values are `Automatic`, `Manual` and `Rolling`. Defaults to `Manual`.
:param pulumi.Input[str] user_data: The Base64-Encoded User Data which should be used for this Virtual Machine Scale Set.
:param pulumi.Input[bool] vtpm_enabled: Specifies if vTPM (Virtual Trusted Platform Module) and Trusted Launch are enabled for the Virtual Machine. Changing this forces a new resource to be created.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['WindowsVirtualMachineScaleSetWinrmListenerArgs']]]] winrm_listeners: One or more `winrm_listener` blocks as defined below.
:param pulumi.Input[bool] zone_balance: Should the Virtual Machines in this Scale Set be strictly evenly distributed across Availability Zones? Defaults to `false`. Changing this forces a new resource to be created.
:param pulumi.Input[Sequence[pulumi.Input[str]]] zones: A list of Availability Zones in which the Virtual Machines in this Scale Set should be created. Changing this forces a new resource to be created.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _WindowsVirtualMachineScaleSetState.__new__(_WindowsVirtualMachineScaleSetState)
__props__.__dict__["additional_capabilities"] = additional_capabilities
__props__.__dict__["additional_unattend_contents"] = additional_unattend_contents
__props__.__dict__["admin_password"] = admin_password
__props__.__dict__["admin_username"] = admin_username
__props__.__dict__["automatic_instance_repair"] = automatic_instance_repair
__props__.__dict__["automatic_os_upgrade_policy"] = automatic_os_upgrade_policy
__props__.__dict__["boot_diagnostics"] = boot_diagnostics
__props__.__dict__["computer_name_prefix"] = computer_name_prefix
__props__.__dict__["custom_data"] = custom_data
__props__.__dict__["data_disks"] = data_disks
__props__.__dict__["do_not_run_extensions_on_overprovisioned_machines"] = do_not_run_extensions_on_overprovisioned_machines
__props__.__dict__["enable_automatic_updates"] = enable_automatic_updates
__props__.__dict__["encryption_at_host_enabled"] = encryption_at_host_enabled
__props__.__dict__["eviction_policy"] = eviction_policy
__props__.__dict__["extensions"] = extensions
__props__.__dict__["extensions_time_budget"] = extensions_time_budget
__props__.__dict__["health_probe_id"] = health_probe_id
__props__.__dict__["identity"] = identity
__props__.__dict__["instances"] = instances
__props__.__dict__["license_type"] = license_type
__props__.__dict__["location"] = location
__props__.__dict__["max_bid_price"] = max_bid_price
__props__.__dict__["name"] = name
__props__.__dict__["network_interfaces"] = network_interfaces
__props__.__dict__["os_disk"] = os_disk
__props__.__dict__["overprovision"] = overprovision
__props__.__dict__["plan"] = plan
__props__.__dict__["platform_fault_domain_count"] = platform_fault_domain_count
__props__.__dict__["priority"] = priority
__props__.__dict__["provision_vm_agent"] = provision_vm_agent
__props__.__dict__["proximity_placement_group_id"] = proximity_placement_group_id
__props__.__dict__["resource_group_name"] = resource_group_name
__props__.__dict__["rolling_upgrade_policy"] = rolling_upgrade_policy
__props__.__dict__["scale_in_policy"] = scale_in_policy
__props__.__dict__["secrets"] = secrets
__props__.__dict__["secure_boot_enabled"] = secure_boot_enabled
__props__.__dict__["single_placement_group"] = single_placement_group
__props__.__dict__["sku"] = sku
__props__.__dict__["source_image_id"] = source_image_id
__props__.__dict__["source_image_reference"] = source_image_reference
__props__.__dict__["tags"] = tags
__props__.__dict__["terminate_notification"] = terminate_notification
__props__.__dict__["timezone"] = timezone
__props__.__dict__["unique_id"] = unique_id
__props__.__dict__["upgrade_mode"] = upgrade_mode
__props__.__dict__["user_data"] = user_data
__props__.__dict__["vtpm_enabled"] = vtpm_enabled
__props__.__dict__["winrm_listeners"] = winrm_listeners
__props__.__dict__["zone_balance"] = zone_balance
__props__.__dict__["zones"] = zones
return WindowsVirtualMachineScaleSet(resource_name, opts=opts, __props__=__props__)
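    # Example (sketch): importing an existing scale set's state via `get`.
    # Assumes the `pulumi_azure` package and a real Azure resource ID; the ID
    # below is a hypothetical placeholder, not a value from this module.
    #
    #     import pulumi
    #     import pulumi_azure as azure
    #
    #     existing = azure.compute.WindowsVirtualMachineScaleSet.get(
    #         "existing",
    #         id="/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/"
    #            "Microsoft.Compute/virtualMachineScaleSets/<name>")
    #     pulumi.export("sku", existing.sku)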
@property
@pulumi.getter(name="additionalCapabilities")
def additional_capabilities(self) -> pulumi.Output[Optional['outputs.WindowsVirtualMachineScaleSetAdditionalCapabilities']]:
"""
An `additional_capabilities` block as defined below.
"""
return pulumi.get(self, "additional_capabilities")
@property
@pulumi.getter(name="additionalUnattendContents")
def additional_unattend_contents(self) -> pulumi.Output[Optional[Sequence['outputs.WindowsVirtualMachineScaleSetAdditionalUnattendContent']]]:
"""
One or more `additional_unattend_content` blocks as defined below.
"""
return pulumi.get(self, "additional_unattend_contents")
@property
@pulumi.getter(name="adminPassword")
def admin_password(self) -> pulumi.Output[str]:
"""
The Password which should be used for the local administrator on each Virtual Machine Scale Set instance. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "admin_password")
@property
@pulumi.getter(name="adminUsername")
def admin_username(self) -> pulumi.Output[str]:
"""
The username of the local administrator on each Virtual Machine Scale Set instance. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "admin_username")
@property
@pulumi.getter(name="automaticInstanceRepair")
def automatic_instance_repair(self) -> pulumi.Output['outputs.WindowsVirtualMachineScaleSetAutomaticInstanceRepair']:
"""
An `automatic_instance_repair` block as defined below. To enable automatic instance repair, this Virtual Machine Scale Set must have a valid `health_probe_id` or an [Application Health Extension](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension).
"""
return pulumi.get(self, "automatic_instance_repair")
@property
@pulumi.getter(name="automaticOsUpgradePolicy")
def automatic_os_upgrade_policy(self) -> pulumi.Output[Optional['outputs.WindowsVirtualMachineScaleSetAutomaticOsUpgradePolicy']]:
"""
An `automatic_os_upgrade_policy` block as defined below. This can only be specified when `upgrade_mode` is set to `Automatic`.
"""
return pulumi.get(self, "automatic_os_upgrade_policy")
@property
@pulumi.getter(name="bootDiagnostics")
def boot_diagnostics(self) -> pulumi.Output[Optional['outputs.WindowsVirtualMachineScaleSetBootDiagnostics']]:
"""
A `boot_diagnostics` block as defined below.
"""
return pulumi.get(self, "boot_diagnostics")
@property
@pulumi.getter(name="computerNamePrefix")
def computer_name_prefix(self) -> pulumi.Output[str]:
"""
The prefix which should be used for the name of the Virtual Machines in this Scale Set. If unspecified this defaults to the value for the `name` field. If the value of the `name` field is not a valid `computer_name_prefix`, then you must specify `computer_name_prefix`.
"""
return pulumi.get(self, "computer_name_prefix")
@property
@pulumi.getter(name="customData")
def custom_data(self) -> pulumi.Output[Optional[str]]:
"""
The Base64-Encoded Custom Data which should be used for this Virtual Machine Scale Set.
"""
return pulumi.get(self, "custom_data")
@property
@pulumi.getter(name="dataDisks")
def data_disks(self) -> pulumi.Output[Optional[Sequence['outputs.WindowsVirtualMachineScaleSetDataDisk']]]:
"""
One or more `data_disk` blocks as defined below.
"""
return pulumi.get(self, "data_disks")
@property
@pulumi.getter(name="doNotRunExtensionsOnOverprovisionedMachines")
def do_not_run_extensions_on_overprovisioned_machines(self) -> pulumi.Output[Optional[bool]]:
"""
Should Virtual Machine Extensions be run on Overprovisioned Virtual Machines in the Scale Set? Defaults to `false`.
"""
return pulumi.get(self, "do_not_run_extensions_on_overprovisioned_machines")
@property
@pulumi.getter(name="enableAutomaticUpdates")
def enable_automatic_updates(self) -> pulumi.Output[Optional[bool]]:
"""
Are automatic updates enabled for this Virtual Machine? Defaults to `true`.
"""
return pulumi.get(self, "enable_automatic_updates")
@property
@pulumi.getter(name="encryptionAtHostEnabled")
def encryption_at_host_enabled(self) -> pulumi.Output[Optional[bool]]:
"""
Should all of the disks (including the temp disk) attached to this Virtual Machine be encrypted by enabling Encryption at Host?
"""
return pulumi.get(self, "encryption_at_host_enabled")
@property
@pulumi.getter(name="evictionPolicy")
def eviction_policy(self) -> pulumi.Output[Optional[str]]:
"""
The Policy which should be used when Virtual Machines are Evicted from the Scale Set. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "eviction_policy")
@property
@pulumi.getter
def extensions(self) -> pulumi.Output[Sequence['outputs.WindowsVirtualMachineScaleSetExtension']]:
"""
One or more `extension` blocks as defined below.
"""
return pulumi.get(self, "extensions")
@property
@pulumi.getter(name="extensionsTimeBudget")
def extensions_time_budget(self) -> pulumi.Output[Optional[str]]:
"""
Specifies the duration allocated for all extensions to start. The time duration should be between `15` minutes and `120` minutes (inclusive) and should be specified in ISO 8601 format. Defaults to `90` minutes (`PT1H30M`).
"""
return pulumi.get(self, "extensions_time_budget")
@property
@pulumi.getter(name="healthProbeId")
def health_probe_id(self) -> pulumi.Output[Optional[str]]:
"""
The ID of a Load Balancer Probe which should be used to determine the health of an instance. This is Required and can only be specified when `upgrade_mode` is set to `Automatic` or `Rolling`.
"""
return pulumi.get(self, "health_probe_id")
@property
@pulumi.getter
def identity(self) -> pulumi.Output[Optional['outputs.WindowsVirtualMachineScaleSetIdentity']]:
"""
An `identity` block as defined below.
"""
return pulumi.get(self, "identity")
@property
@pulumi.getter
def instances(self) -> pulumi.Output[int]:
"""
The number of Virtual Machines in the Scale Set.
"""
return pulumi.get(self, "instances")
@property
@pulumi.getter(name="licenseType")
def license_type(self) -> pulumi.Output[Optional[str]]:
"""
Specifies the type of on-premises license (also known as [Azure Hybrid Use Benefit](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-hybrid-use-benefit-licensing)) which should be used for this Virtual Machine Scale Set. Possible values are `None`, `Windows_Client` and `Windows_Server`.
"""
return pulumi.get(self, "license_type")
@property
@pulumi.getter
def location(self) -> pulumi.Output[str]:
"""
The Azure location where the Windows Virtual Machine Scale Set should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "location")
@property
@pulumi.getter(name="maxBidPrice")
def max_bid_price(self) -> pulumi.Output[Optional[float]]:
"""
The maximum price you're willing to pay for each Virtual Machine in this Scale Set, in US Dollars, which must be greater than the current spot price. If this bid price falls below the current spot price the Virtual Machines in the Scale Set will be evicted using the `eviction_policy`. Defaults to `-1`, which means that each Virtual Machine in the Scale Set should not be evicted for price reasons.
"""
return pulumi.get(self, "max_bid_price")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The name of the Windows Virtual Machine Scale Set. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="networkInterfaces")
def network_interfaces(self) -> pulumi.Output[Sequence['outputs.WindowsVirtualMachineScaleSetNetworkInterface']]:
"""
One or more `network_interface` blocks as defined below.
"""
return pulumi.get(self, "network_interfaces")
@property
@pulumi.getter(name="osDisk")
def os_disk(self) -> pulumi.Output['outputs.WindowsVirtualMachineScaleSetOsDisk']:
"""
An `os_disk` block as defined below.
"""
return pulumi.get(self, "os_disk")
@property
@pulumi.getter
def overprovision(self) -> pulumi.Output[Optional[bool]]:
"""
Should Azure over-provision Virtual Machines in this Scale Set? This means that multiple Virtual Machines will be provisioned and Azure will keep the instances which become available first - which improves provisioning success rates and reduces deployment time. You're not billed for these over-provisioned VMs and they don't count towards the Subscription Quota. Defaults to `true`.
"""
return pulumi.get(self, "overprovision")
@property
@pulumi.getter
def plan(self) -> pulumi.Output[Optional['outputs.WindowsVirtualMachineScaleSetPlan']]:
"""
A `plan` block as documented below.
"""
return pulumi.get(self, "plan")
@property
@pulumi.getter(name="platformFaultDomainCount")
def platform_fault_domain_count(self) -> pulumi.Output[int]:
"""
Specifies the number of fault domains that are used by this Windows Virtual Machine Scale Set. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "platform_fault_domain_count")
@property
@pulumi.getter
def priority(self) -> pulumi.Output[Optional[str]]:
"""
The Priority of this Virtual Machine Scale Set. Possible values are `Regular` and `Spot`. Defaults to `Regular`. Changing this value forces a new resource.
"""
return pulumi.get(self, "priority")
@property
@pulumi.getter(name="provisionVmAgent")
def provision_vm_agent(self) -> pulumi.Output[Optional[bool]]:
"""
Should the Azure VM Agent be provisioned on each Virtual Machine in the Scale Set? Defaults to `true`. Changing this value forces a new resource to be created.
"""
return pulumi.get(self, "provision_vm_agent")
@property
@pulumi.getter(name="proximityPlacementGroupId")
def proximity_placement_group_id(self) -> pulumi.Output[Optional[str]]:
"""
The ID of the Proximity Placement Group to which the Virtual Machine Scale Set should be assigned. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "proximity_placement_group_id")
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Output[str]:
"""
The name of the Resource Group in which the Windows Virtual Machine Scale Set should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "resource_group_name")
@property
@pulumi.getter(name="rollingUpgradePolicy")
def rolling_upgrade_policy(self) -> pulumi.Output[Optional['outputs.WindowsVirtualMachineScaleSetRollingUpgradePolicy']]:
"""
A `rolling_upgrade_policy` block as defined below. This is Required and can only be specified when `upgrade_mode` is set to `Automatic` or `Rolling`.
"""
return pulumi.get(self, "rolling_upgrade_policy")
@property
@pulumi.getter(name="scaleInPolicy")
def scale_in_policy(self) -> pulumi.Output[Optional[str]]:
"""
The scale-in policy rule that decides which virtual machines are chosen for removal when a Virtual Machine Scale Set is scaled in. Possible values for the scale-in policy rules are `Default`, `NewestVM` and `OldestVM`, defaults to `Default`. For more information about scale in policy, please [refer to this doc](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy).
"""
return pulumi.get(self, "scale_in_policy")
@property
@pulumi.getter
def secrets(self) -> pulumi.Output[Optional[Sequence['outputs.WindowsVirtualMachineScaleSetSecret']]]:
"""
One or more `secret` blocks as defined below.
"""
return pulumi.get(self, "secrets")
@property
@pulumi.getter(name="secureBootEnabled")
def secure_boot_enabled(self) -> pulumi.Output[Optional[bool]]:
"""
Specifies whether Secure Boot and Trusted Launch are enabled for the Virtual Machine. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "secure_boot_enabled")
@property
@pulumi.getter(name="singlePlacementGroup")
def single_placement_group(self) -> pulumi.Output[Optional[bool]]:
"""
Should this Virtual Machine Scale Set be limited to a Single Placement Group, capping the number of instances at 100 Virtual Machines? Defaults to `true`.
"""
return pulumi.get(self, "single_placement_group")
@property
@pulumi.getter
def sku(self) -> pulumi.Output[str]:
"""
The Virtual Machine SKU for the Scale Set, such as `Standard_F2`.
"""
return pulumi.get(self, "sku")
@property
@pulumi.getter(name="sourceImageId")
def source_image_id(self) -> pulumi.Output[Optional[str]]:
"""
The ID of an Image which each Virtual Machine in this Scale Set should be based on.
"""
return pulumi.get(self, "source_image_id")
@property
@pulumi.getter(name="sourceImageReference")
def source_image_reference(self) -> pulumi.Output[Optional['outputs.WindowsVirtualMachineScaleSetSourceImageReference']]:
"""
A `source_image_reference` block as defined below.
"""
return pulumi.get(self, "source_image_reference")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
A mapping of tags which should be assigned to this Virtual Machine Scale Set.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter(name="terminateNotification")
def terminate_notification(self) -> pulumi.Output['outputs.WindowsVirtualMachineScaleSetTerminateNotification']:
"""
A `terminate_notification` block as defined below.
"""
return pulumi.get(self, "terminate_notification")
@property
@pulumi.getter
def timezone(self) -> pulumi.Output[Optional[str]]:
"""
Specifies the time zone of the virtual machine, [the possible values are defined here](https://jackstromberg.com/2017/01/list-of-time-zones-consumed-by-azure/).
"""
return pulumi.get(self, "timezone")
@property
@pulumi.getter(name="uniqueId")
def unique_id(self) -> pulumi.Output[str]:
"""
The Unique ID for this Windows Virtual Machine Scale Set.
"""
return pulumi.get(self, "unique_id")
@property
@pulumi.getter(name="upgradeMode")
def upgrade_mode(self) -> pulumi.Output[Optional[str]]:
"""
Specifies how Upgrades (e.g. changing the Image/SKU) should be performed on Virtual Machine Instances. Possible values are `Automatic`, `Manual` and `Rolling`. Defaults to `Manual`.
"""
return pulumi.get(self, "upgrade_mode")
@property
@pulumi.getter(name="userData")
def user_data(self) -> pulumi.Output[Optional[str]]:
"""
The Base64-Encoded User Data which should be used for this Virtual Machine Scale Set.
"""
return pulumi.get(self, "user_data")
@property
@pulumi.getter(name="vtpmEnabled")
def vtpm_enabled(self) -> pulumi.Output[Optional[bool]]:
"""
Specifies whether vTPM (Virtual Trusted Platform Module) and Trusted Launch are enabled for the Virtual Machine. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "vtpm_enabled")
@property
@pulumi.getter(name="winrmListeners")
def winrm_listeners(self) -> pulumi.Output[Optional[Sequence['outputs.WindowsVirtualMachineScaleSetWinrmListener']]]:
"""
One or more `winrm_listener` blocks as defined below.
"""
return pulumi.get(self, "winrm_listeners")
@property
@pulumi.getter(name="zoneBalance")
def zone_balance(self) -> pulumi.Output[Optional[bool]]:
"""
Should the Virtual Machines in this Scale Set be strictly evenly distributed across Availability Zones? Defaults to `false`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "zone_balance")
@property
@pulumi.getter
def zones(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
A list of Availability Zones in which the Virtual Machines in this Scale Set should be created. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "zones")
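Every property above follows the same generated pattern: a `@pulumi.getter` accessor that delegates to `pulumi.get`, which resolves the value from the resource's backing state by key. Below is a minimal stdlib-only sketch of that getter pattern; the `getter`, `get` and `ScaleSetSketch` names are illustrative stand-ins, not part of the Pulumi SDK.

```python
# Minimal sketch of the property-getter pattern used by the generated
# Pulumi classes above. All names here are illustrative stand-ins.
def getter(name=None):
    """Record the wire name a property maps to (like @pulumi.getter)."""
    def wrap(fn):
        fn._wire_name = name or fn.__name__
        return fn
    return wrap

def get(resource, prop):
    """Look a property up in the resource's backing dict (like pulumi.get)."""
    return resource._values.get(prop)

class ScaleSetSketch:
    def __init__(self, **values):
        self._values = values

    @property
    @getter(name="upgradeMode")
    def upgrade_mode(self):
        """Specifies how upgrades should be performed. Defaults to `Manual`."""
        return get(self, "upgrade_mode")

s = ScaleSetSketch(upgrade_mode="Rolling")
```

The real SDK additionally wraps each value in a `pulumi.Output` so it can be composed asynchronously; this sketch only mirrors the lookup-by-name shape of the accessors.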
# === tests/test_lnk_frame.py (dwhall/HeyMac, MIT) ===
#!/usr/bin/env python3
import unittest
from heymac.lnk import *
class TestHeyMacFrame(unittest.TestCase):
"""Tests the HeymacFrame building and serializing.
"""
def test_mac(self):
f = HeymacFrame(HeymacFramePidType.CSMA)
b = bytes(f)
self.assertEqual(b, b"\xE4\x00")
f = HeymacFrame.parse(b)
self.assertTrue(f.is_heymac())
self.assertEqual(f.fctl, 0)
self.assertIsNone(f.netid)
self.assertIsNone(f.daddr)
self.assertIsNone(f.saddr)
self.assertIsNone(f.payld)
self.assertIsNone(f.hops)
self.assertIsNone(f.taddr)
def test_not_mac(self):
b = b"\x00\x00"
# expect that parser raises an exception due to invalid frame header
self.assertRaises(HeymacFrameError, HeymacFrame.parse, b)
def test_csma(self):
f = HeymacFrame(HeymacFramePidType.CSMA)
b = bytes(f)
self.assertEqual(b, b"\xE4\x00")
f = HeymacFrame.parse(b)
self.assertEqual(f.fctl, 0x00)
self.assertIsNone(f.netid)
self.assertIsNone(f.daddr)
self.assertIsNone(f.saddr)
self.assertIsNone(f.payld)
self.assertIsNone(f.hops)
self.assertIsNone(f.taddr)
def test_min_payld(self):
f = HeymacFrame(HeymacFramePidType.CSMA)
f.payld = HeymacCmdTxt(msg=b"ABCD")
b = bytes(f)
self.assertEqual(b, b"\xE4\x00\x81ABCD")
f = HeymacFrame.parse(b)
self.assertEqual(f.fctl, 0)
self.assertIsNone(f.netid)
self.assertIsNone(f.daddr)
self.assertIsNone(f.saddr)
self.assertIsInstance(f.payld, HeymacCmdTxt)
self.assertIsNone(f.hops)
self.assertIsNone(f.taddr)
def test_saddr64b(self):
f = HeymacFrame(HeymacFramePidType.CSMA)
f.saddr = b"\x01\x02\x03\x04\x05\x06\x07\x08"
b = bytes(f)
self.assertEqual(b, b"\xE4\x44\x01\x02\x03\x04\x05\x06\x07\x08")
f = HeymacFrame.parse(b)
self.assertEqual(f.fctl, 0x44)
self.assertIsNone(f.netid)
self.assertIsNone(f.daddr)
self.assertEqual(
f.saddr,
b"\x01\x02\x03\x04\x05\x06\x07\x08")
self.assertIsNone(f.payld)
self.assertIsNone(f.hops)
self.assertIsNone(f.taddr)
def test_saddr64b_daddr64b(self):
f = HeymacFrame(HeymacFramePidType.CSMA)
f.daddr = b"\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8"
f.saddr = b"\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8"
f.payld = HeymacCmdTxt(msg=b"hi")
b = bytes(f)
self.assertEqual(b, b"\xE4\x54\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\x81hi")
f = HeymacFrame.parse(b)
self.assertEqual(f.fctl, 0x54)
self.assertIsNone(f.netid)
self.assertEqual(f.daddr, b"\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8")
self.assertEqual(f.saddr, b"\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8")
self.assertIsInstance(f.payld, HeymacCmdTxt)
self.assertIsNone(f.hops)
self.assertIsNone(f.taddr)
def test_saddr16b_daddr16b(self):
f = HeymacFrame(HeymacFramePidType.CSMA)
f.daddr = b"\xd1\xd2"
f.saddr = b"\xc1\xc2"
f.payld = HeymacCmdTxt(msg=b"hello world")
b = bytes(f)
self.assertEqual(b, b"\xE4\x14\xd1\xd2\xc1\xc2\x81hello world")
f = HeymacFrame.parse(b)
self.assertEqual(f.fctl, 0x14)
self.assertIsNone(f.netid)
self.assertEqual(f.daddr, b"\xd1\xd2")
self.assertEqual(f.saddr, b"\xc1\xc2")
self.assertIsInstance(f.payld, HeymacCmdTxt)
self.assertIsNone(f.hops)
self.assertIsNone(f.taddr)
def test_netid_daddr(self):
f = HeymacFrame(HeymacFramePidType.CSMA)
f.netid = b"\x80\xA5"
f.daddr = b"\xd1\xd2"
f.payld = HeymacCmdTxt(msg=b"data")
b = bytes(f)
self.assertEqual(b, b"\xE4\x30\x80\xa5\xd1\xd2\x81data")
f = HeymacFrame.parse(b)
self.assertEqual(f.fctl, 0x30)
self.assertEqual(f.netid, b"\x80\xA5")
self.assertEqual(f.daddr, b"\xd1\xd2")
self.assertIsNone(f.saddr)
self.assertIsInstance(f.payld, HeymacCmdTxt)
self.assertIsNone(f.hops)
self.assertIsNone(f.taddr)
def test_saddr16b_daddr16b_arg(self):
f = HeymacFrame(HeymacFramePidType.CSMA,
daddr=b"\xd1\xd2",
saddr=b"\xc1\xc2",
payld=HeymacCmdTxt(msg=b"hello world"))
b = bytes(f)
self.assertEqual(b, b"\xE4\x14\xd1\xd2\xc1\xc2\x81hello world")
f = HeymacFrame.parse(b)
self.assertEqual(f.fctl, 0x14)
self.assertIsNone(f.netid)
self.assertEqual(f.daddr, b"\xd1\xd2")
self.assertEqual(f.saddr, b"\xc1\xc2")
self.assertIsInstance(f.payld, HeymacCmdTxt)
self.assertIsNone(f.hops)
self.assertIsNone(f.taddr)
def _test_invalid_field(self):
f = HeymacFrame(HeymacFramePidType.CSMA,
daddr=b"\xd1\xd2",
saddr=b"\xc1\xc2",
timmy=b"timmy")
def test_invalid_field(self):
self.assertRaises(HeymacFrameError, self._test_invalid_field)
def test_hie(self):
f = HeymacFrame(HeymacFramePidType.CSMA,
ies=HeymacIeSequence(
HeymacHIeSqncNmbr(42),
HeymacHIeTerm(),
HeymacPIeTerm()))
b = bytes(f)
self.assertEqual(b, b"\xE4\x08\x81\x00\x2A\x00\x20")
f = HeymacFrame.parse(b)
self.assertTrue(f.is_heymac())
self.assertEqual(f.fctl, 0x08)
self.assertIsNone(f.netid)
self.assertIsNone(f.daddr)
self.assertEqual(type(f.ies), bytes)
self.assertIsNone(f.saddr)
self.assertIsNone(f.payld)
self.assertIsNone(f.hops)
self.assertIsNone(f.taddr)
def test_pie(self):
f = HeymacFrame(HeymacFramePidType.CSMA,
ies=HeymacIeSequence(
HeymacPIeMic(5, 4),
HeymacPIeTerm()))
b = bytes(f)
self.assertEqual(b, b"\xE4\x08\xA3\x05\x04\x20")
f = HeymacFrame.parse(b)
self.assertTrue(f.is_heymac())
self.assertEqual(f.fctl, 0x08)
self.assertIsNone(f.netid)
self.assertIsNone(f.daddr)
self.assertEqual(type(f.ies), bytes)
self.assertIsNone(f.saddr)
self.assertIsNone(f.payld)
self.assertIsNone(f.hops)
self.assertIsNone(f.taddr)
def test_available_payld_sz(self):
f = HeymacFrame(HeymacFramePidType.CSMA,
saddr=b"\x10\x00",
hops=4,
taddr=b"\x11\x00")
avail1 = f.available_payld_sz()
b = bytes(f)
self.assertEqual(b, b"\xE4\x06\x10\x00\x04\x11\x00")
# Add IEs to the frame
        ies = HeymacIeSequence(
            HeymacPIeFrag0(500, 21),
            HeymacPIeTerm())
f.ies = ies
avail2 = f.available_payld_sz()
self.assertEqual(avail1, avail2 + len(ies))
if __name__ == '__main__':
unittest.main()
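The test vectors above pin down a compact layout: a fixed PID byte (`0xE4` for CSMA) followed by an Fctl byte whose bits flag which optional fields follow. Here is a small round-trip sketch covering just the 16-bit address cases; the flag values are inferred from the expected byte strings in the tests, not taken from the HeyMac spec itself.

```python
# Illustrative sketch of the two-byte header + optional-field layout the
# tests above exercise. The flag values (0x10 for a 16-bit daddr, 0x04 for
# a 16-bit saddr) are inferred from the test vectors, not from HeyMac.
PID_CSMA = 0xE4
FCTL_DADDR = 0x10
FCTL_SADDR = 0x04

def build(daddr=b"", saddr=b"", payld=b""):
    fctl = (FCTL_DADDR if daddr else 0) | (FCTL_SADDR if saddr else 0)
    return bytes([PID_CSMA, fctl]) + daddr + saddr + payld

def parse(b):
    assert b[0] == PID_CSMA, "not a CSMA frame"
    fctl, i = b[1], 2
    daddr = saddr = b""
    if fctl & FCTL_DADDR:
        daddr, i = b[i:i + 2], i + 2
    if fctl & FCTL_SADDR:
        saddr, i = b[i:i + 2], i + 2
    return daddr, saddr, b[i:]
```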
# === messenger_pb2_grpc.py (Oanikulin/mafia_game, Apache-2.0) ===
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
import messenger_pb2 as messenger__pb2
class MaphiaStub(object):
"""Missing associated documentation comment in .proto file."""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.WaitForGame = channel.unary_stream(
'/Maphia/WaitForGame',
request_serializer=messenger__pb2.InitClient.SerializeToString,
response_deserializer=messenger__pb2.InitClient.FromString,
)
self.GetClients = channel.unary_stream(
'/Maphia/GetClients',
request_serializer=messenger__pb2.Empty.SerializeToString,
response_deserializer=messenger__pb2.InitClient.FromString,
)
self.Connect = channel.unary_unary(
'/Maphia/Connect',
request_serializer=messenger__pb2.InitClient.SerializeToString,
response_deserializer=messenger__pb2.Empty.FromString,
)
self.EndDay = channel.unary_unary(
'/Maphia/EndDay',
request_serializer=messenger__pb2.InitClient.SerializeToString,
response_deserializer=messenger__pb2.Empty.FromString,
)
self.VoteKill = channel.unary_unary(
'/Maphia/VoteKill',
request_serializer=messenger__pb2.InitClient.SerializeToString,
response_deserializer=messenger__pb2.Empty.FromString,
)
self.KillNight = channel.unary_unary(
'/Maphia/KillNight',
request_serializer=messenger__pb2.InitClient.SerializeToString,
response_deserializer=messenger__pb2.Empty.FromString,
)
self.Check = channel.unary_unary(
'/Maphia/Check',
request_serializer=messenger__pb2.InitClient.SerializeToString,
response_deserializer=messenger__pb2.CheckResult.FromString,
)
self.PostMessage = channel.unary_unary(
'/Maphia/PostMessage',
request_serializer=messenger__pb2.ChatMessage.SerializeToString,
response_deserializer=messenger__pb2.Empty.FromString,
)
self.GetMessages = channel.unary_stream(
'/Maphia/GetMessages',
request_serializer=messenger__pb2.InitClient.SerializeToString,
response_deserializer=messenger__pb2.ChatMessage.FromString,
)
self.GetRole = channel.unary_unary(
'/Maphia/GetRole',
request_serializer=messenger__pb2.InitClient.SerializeToString,
response_deserializer=messenger__pb2.Universal.FromString,
)
self.StartDay = channel.unary_stream(
'/Maphia/StartDay',
request_serializer=messenger__pb2.InitClient.SerializeToString,
response_deserializer=messenger__pb2.InitClient.FromString,
)
class MaphiaServicer(object):
"""Missing associated documentation comment in .proto file."""
def WaitForGame(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def GetClients(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Connect(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def EndDay(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def VoteKill(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def KillNight(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Check(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def PostMessage(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def GetMessages(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def GetRole(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def StartDay(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_MaphiaServicer_to_server(servicer, server):
rpc_method_handlers = {
'WaitForGame': grpc.unary_stream_rpc_method_handler(
servicer.WaitForGame,
request_deserializer=messenger__pb2.InitClient.FromString,
response_serializer=messenger__pb2.InitClient.SerializeToString,
),
'GetClients': grpc.unary_stream_rpc_method_handler(
servicer.GetClients,
request_deserializer=messenger__pb2.Empty.FromString,
response_serializer=messenger__pb2.InitClient.SerializeToString,
),
'Connect': grpc.unary_unary_rpc_method_handler(
servicer.Connect,
request_deserializer=messenger__pb2.InitClient.FromString,
response_serializer=messenger__pb2.Empty.SerializeToString,
),
'EndDay': grpc.unary_unary_rpc_method_handler(
servicer.EndDay,
request_deserializer=messenger__pb2.InitClient.FromString,
response_serializer=messenger__pb2.Empty.SerializeToString,
),
'VoteKill': grpc.unary_unary_rpc_method_handler(
servicer.VoteKill,
request_deserializer=messenger__pb2.InitClient.FromString,
response_serializer=messenger__pb2.Empty.SerializeToString,
),
'KillNight': grpc.unary_unary_rpc_method_handler(
servicer.KillNight,
request_deserializer=messenger__pb2.InitClient.FromString,
response_serializer=messenger__pb2.Empty.SerializeToString,
),
'Check': grpc.unary_unary_rpc_method_handler(
servicer.Check,
request_deserializer=messenger__pb2.InitClient.FromString,
response_serializer=messenger__pb2.CheckResult.SerializeToString,
),
'PostMessage': grpc.unary_unary_rpc_method_handler(
servicer.PostMessage,
request_deserializer=messenger__pb2.ChatMessage.FromString,
response_serializer=messenger__pb2.Empty.SerializeToString,
),
'GetMessages': grpc.unary_stream_rpc_method_handler(
servicer.GetMessages,
request_deserializer=messenger__pb2.InitClient.FromString,
response_serializer=messenger__pb2.ChatMessage.SerializeToString,
),
'GetRole': grpc.unary_unary_rpc_method_handler(
servicer.GetRole,
request_deserializer=messenger__pb2.InitClient.FromString,
response_serializer=messenger__pb2.Universal.SerializeToString,
),
'StartDay': grpc.unary_stream_rpc_method_handler(
servicer.StartDay,
request_deserializer=messenger__pb2.InitClient.FromString,
response_serializer=messenger__pb2.InitClient.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'Maphia', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
# This class is part of an EXPERIMENTAL API.
class Maphia(object):
"""Missing associated documentation comment in .proto file."""
@staticmethod
def WaitForGame(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_stream(request, target, '/Maphia/WaitForGame',
messenger__pb2.InitClient.SerializeToString,
messenger__pb2.InitClient.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def GetClients(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_stream(request, target, '/Maphia/GetClients',
messenger__pb2.Empty.SerializeToString,
messenger__pb2.InitClient.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Connect(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/Maphia/Connect',
messenger__pb2.InitClient.SerializeToString,
messenger__pb2.Empty.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def EndDay(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/Maphia/EndDay',
messenger__pb2.InitClient.SerializeToString,
messenger__pb2.Empty.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def VoteKill(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/Maphia/VoteKill',
messenger__pb2.InitClient.SerializeToString,
messenger__pb2.Empty.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def KillNight(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/Maphia/KillNight',
messenger__pb2.InitClient.SerializeToString,
messenger__pb2.Empty.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def Check(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/Maphia/Check',
messenger__pb2.InitClient.SerializeToString,
messenger__pb2.CheckResult.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def PostMessage(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/Maphia/PostMessage',
messenger__pb2.ChatMessage.SerializeToString,
messenger__pb2.Empty.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def GetMessages(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_stream(request, target, '/Maphia/GetMessages',
messenger__pb2.InitClient.SerializeToString,
messenger__pb2.ChatMessage.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def GetRole(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/Maphia/GetRole',
messenger__pb2.InitClient.SerializeToString,
messenger__pb2.Universal.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def StartDay(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_stream(request, target, '/Maphia/StartDay',
messenger__pb2.InitClient.SerializeToString,
messenger__pb2.InitClient.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
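The generated module pairs each servicer method with its request deserializer and response serializer, and `add_MaphiaServicer_to_server` registers those triples under `/Maphia/<Method>` paths. A stdlib-only sketch of that registration-and-dispatch shape follows; `Server`, `register` and `call` are illustrative stand-ins, not part of the grpc API.

```python
# Stdlib-only sketch of the registration/dispatch shape that
# add_MaphiaServicer_to_server builds with grpc method handlers.
class Server:
    def __init__(self):
        self._handlers = {}

    def register(self, service, methods):
        # methods maps name -> (deserialize, handler, serialize),
        # mirroring the rpc_method_handlers dict in the generated code.
        for name, triple in methods.items():
            self._handlers["/%s/%s" % (service, name)] = triple

    def call(self, path, raw_request):
        deserialize, handler, serialize = self._handlers[path]
        return serialize(handler(deserialize(raw_request)))

srv = Server()
srv.register("Maphia", {
    "GetRole": (bytes.decode, lambda req: req.upper(), str.encode),
})
```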
# === src/pymortests/algorithms/gram_schmidt.py (lbalicki/pymor) ===
# This file is part of the pyMOR project (http://www.pymor.org).
# Copyright 2013-2019 pyMOR developers and contributors. All rights reserved.
# License: BSD 2-Clause License (http://opensource.org/licenses/BSD-2-Clause)
import numpy as np
from pymor.algorithms.basic import almost_equal
from pymor.algorithms.gram_schmidt import gram_schmidt, gram_schmidt_biorth
from pymortests.fixtures.operator import operator_with_arrays_and_products
from pymortests.fixtures.vectorarray import vector_array, vector_array_without_reserve
def test_gram_schmidt(vector_array):
U = vector_array
V = U.copy()
onb = gram_schmidt(U, copy=True)
assert np.all(almost_equal(U, V))
assert np.allclose(onb.dot(onb), np.eye(len(onb)))
assert np.all(almost_equal(U, onb.lincomb(onb.dot(U).T), rtol=1e-13))
onb2 = gram_schmidt(U, copy=False)
assert np.all(almost_equal(onb, onb2))
assert np.all(almost_equal(onb, U))
def test_gram_schmidt_with_R(vector_array):
U = vector_array
V = U.copy()
onb, R = gram_schmidt(U, return_R=True, copy=True)
assert np.all(almost_equal(U, V))
assert np.allclose(onb.dot(onb), np.eye(len(onb)))
assert np.all(almost_equal(U, onb.lincomb(onb.dot(U).T), rtol=1e-13))
assert np.all(almost_equal(V, onb.lincomb(R.T)))
onb2, R2 = gram_schmidt(U, return_R=True, copy=False)
assert np.all(almost_equal(onb, onb2))
assert np.all(R == R2)
assert np.all(almost_equal(onb, U))
def test_gram_schmidt_with_product(operator_with_arrays_and_products):
_, _, U, _, p, _ = operator_with_arrays_and_products
V = U.copy()
onb = gram_schmidt(U, product=p, copy=True)
assert np.all(almost_equal(U, V))
assert np.allclose(p.apply2(onb, onb), np.eye(len(onb)))
assert np.all(almost_equal(U, onb.lincomb(p.apply2(onb, U).T), rtol=1e-13))
onb2 = gram_schmidt(U, product=p, copy=False)
assert np.all(almost_equal(onb, onb2))
assert np.all(almost_equal(onb, U))
def test_gram_schmidt_with_product_and_R(operator_with_arrays_and_products):
_, _, U, _, p, _ = operator_with_arrays_and_products
V = U.copy()
onb, R = gram_schmidt(U, product=p, return_R=True, copy=True)
assert np.all(almost_equal(U, V))
assert np.allclose(p.apply2(onb, onb), np.eye(len(onb)))
assert np.all(almost_equal(U, onb.lincomb(p.apply2(onb, U).T), rtol=1e-13))
assert np.all(almost_equal(U, onb.lincomb(R.T)))
onb2, R2 = gram_schmidt(U, product=p, return_R=True, copy=False)
assert np.all(almost_equal(onb, onb2))
assert np.all(R == R2)
assert np.all(almost_equal(onb, U))
def test_gram_schmidt_biorth(vector_array):
U = vector_array
if U.dim < 2:
return
l = len(U) // 2
l = min((l, U.dim - 1))
if l < 1:
return
U1 = U[:l].copy()
U2 = U[l:2 * l].copy()
V1 = U1.copy()
V2 = U2.copy()
A1, A2 = gram_schmidt_biorth(U1, U2, copy=True)
assert np.all(almost_equal(U1, V1))
assert np.all(almost_equal(U2, V2))
assert np.allclose(A2.dot(A1), np.eye(len(A1)))
c = np.linalg.cond(A1.to_numpy()) * np.linalg.cond(A2.to_numpy())
assert np.all(almost_equal(U1, A1.lincomb(A2.dot(U1).T), rtol=c * 1e-14))
assert np.all(almost_equal(U2, A2.lincomb(A1.dot(U2).T), rtol=c * 1e-14))
B1, B2 = gram_schmidt_biorth(U1, U2, copy=False)
assert np.all(almost_equal(A1, B1))
assert np.all(almost_equal(A2, B2))
assert np.all(almost_equal(A1, U1))
assert np.all(almost_equal(A2, U2))
def test_gram_schmidt_biorth_with_product(operator_with_arrays_and_products):
_, _, U, _, p, _ = operator_with_arrays_and_products
if U.dim < 2:
return
l = len(U) // 2
l = min((l, U.dim - 1))
if l < 1:
return
U1 = U[:l].copy()
U2 = U[l:2 * l].copy()
V1 = U1.copy()
V2 = U2.copy()
A1, A2 = gram_schmidt_biorth(U1, U2, product=p, copy=True)
assert np.all(almost_equal(U1, V1))
assert np.all(almost_equal(U2, V2))
assert np.allclose(p.apply2(A2, A1), np.eye(len(A1)))
c = np.linalg.cond(A1.to_numpy()) * np.linalg.cond(p.apply(A2).to_numpy())
assert np.all(almost_equal(U1, A1.lincomb(p.apply2(A2, U1).T), rtol=c * 1e-14))
assert np.all(almost_equal(U2, A2.lincomb(p.apply2(A1, U2).T), rtol=c * 1e-14))
B1, B2 = gram_schmidt_biorth(U1, U2, product=p, copy=False)
assert np.all(almost_equal(A1, B1))
assert np.all(almost_equal(A2, B2))
assert np.all(almost_equal(A1, U1))
assert np.all(almost_equal(A2, U2))
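The assertions above all reduce to one invariant: the output of `gram_schmidt` is orthonormal (`onb.dot(onb)` is the identity) and spans the input. Below is a pure-Python modified Gram-Schmidt over plain lists that illustrates that invariant; this is a sketch of the algorithm only, not pyMOR's `VectorArray`-based implementation.

```python
# Minimal pure-Python modified Gram-Schmidt over lists of floats.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors, atol=1e-12):
    onb = []
    for v in vectors:
        w = list(v)
        for q in onb:                      # remove components along earlier basis vectors
            c = dot(q, w)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = math.sqrt(dot(w, w))
        if norm > atol:                    # drop (near-)linearly dependent vectors
            onb.append([wi / norm for wi in w])
    return onb

onb = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 0.0], [2.0, 1.0, 0.0]])
```

The third input vector is a linear combination of the first two, so it is dropped, just as pyMOR's implementation prunes vectors whose orthogonalized norm falls below tolerance.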
# === spirit/signals/handlers/__init__.py (rterehov/Spirit, MIT) ===
# -*- coding: utf-8 -*-
from . import comment
from . import comment_bookmark
from . import comment_history
from . import topic
from . import topic_notification
from . import topic_poll
from . import topic_unread
# === ast/testdata/decorator.py (MaxTurchin/pycopy-lib, PSF-2.0) ===
@decor
def foo():
pass
@decor()
def foo():
pass
@decor(1, 2)
def foo():
pass
@clsdecor(1)
class Foo(Bar):
pass
# === napalm_yang/models/openconfig/network_instances/network_instance/protocols/protocol/bgp/peer_groups/peer_group/graceful_restart/state/__init__.py (ckishimo/napalm-yang, Apache-2.0) ===
# -*- coding: utf-8 -*-
from operator import attrgetter
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType
from pyangbind.lib.yangtypes import RestrictedClassType
from pyangbind.lib.yangtypes import TypedListType
from pyangbind.lib.yangtypes import YANGBool
from pyangbind.lib.yangtypes import YANGListType
from pyangbind.lib.yangtypes import YANGDynClass
from pyangbind.lib.yangtypes import ReferenceType
from pyangbind.lib.base import PybindBase
from collections import OrderedDict
from decimal import Decimal
from bitarray import bitarray
import six
# PY3 support of some PY2 keywords (needs improvement)
if six.PY3:
import builtins as __builtin__
long = int
elif six.PY2:
import __builtin__
class state(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module openconfig-network-instance - based on the path /network-instances/network-instance/protocols/protocol/bgp/peer-groups/peer-group/graceful-restart/state. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: State information associated with graceful-restart
"""
__slots__ = (
"_path_helper",
"_extmethods",
"__enabled",
"__restart_time",
"__stale_routes_time",
"__helper_only",
)
_yang_name = "state"
_pybind_generated_by = "container"
def __init__(self, *args, **kwargs):
self._path_helper = False
self._extmethods = False
self.__enabled = YANGDynClass(
base=YANGBool,
is_leaf=True,
yang_name="enabled",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=False,
)
self.__restart_time = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..65535"]}, int_size=16
),
restriction_dict={"range": ["0..4096"]},
),
is_leaf=True,
yang_name="restart-time",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="uint16",
is_config=False,
)
self.__stale_routes_time = YANGDynClass(
base=RestrictedPrecisionDecimalType(precision=2),
is_leaf=True,
yang_name="stale-routes-time",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="decimal64",
is_config=False,
)
self.__helper_only = YANGDynClass(
base=YANGBool,
is_leaf=True,
yang_name="helper-only",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=False,
)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path() + [self._yang_name]
else:
return [
"network-instances",
"network-instance",
"protocols",
"protocol",
"bgp",
"peer-groups",
"peer-group",
"graceful-restart",
"state",
]
def _get_enabled(self):
"""
Getter method for enabled, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/peer_groups/peer_group/graceful_restart/state/enabled (boolean)
YANG Description: Enable or disable the graceful-restart capability.
"""
return self.__enabled
def _set_enabled(self, v, load=False):
"""
Setter method for enabled, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/peer_groups/peer_group/graceful_restart/state/enabled (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_enabled is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_enabled() directly.
YANG Description: Enable or disable the graceful-restart capability.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=YANGBool,
is_leaf=True,
yang_name="enabled",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """enabled must be of a type compatible with boolean""",
"defined-type": "boolean",
"generated-type": """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='boolean', is_config=False)""",
}
)
self.__enabled = t
if hasattr(self, "_set"):
self._set()
def _unset_enabled(self):
self.__enabled = YANGDynClass(
base=YANGBool,
is_leaf=True,
yang_name="enabled",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=False,
)
def _get_restart_time(self):
"""
Getter method for restart_time, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/peer_groups/peer_group/graceful_restart/state/restart_time (uint16)
YANG Description: Estimated time (in seconds) for the local BGP speaker to
restart a session. This value is advertised in the graceful
restart BGP capability. This is a 12-bit value, referred to
as Restart Time in RFC4724. Per RFC4724, the suggested
default value is <= the hold-time value.
"""
return self.__restart_time
def _set_restart_time(self, v, load=False):
"""
Setter method for restart_time, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/peer_groups/peer_group/graceful_restart/state/restart_time (uint16)
If this variable is read-only (config: false) in the
source YANG file, then _set_restart_time is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_restart_time() directly.
YANG Description: Estimated time (in seconds) for the local BGP speaker to
restart a session. This value is advertised in the graceful
restart BGP capability. This is a 12-bit value, referred to
as Restart Time in RFC4724. Per RFC4724, the suggested
default value is <= the hold-time value.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int,
restriction_dict={"range": ["0..65535"]},
int_size=16,
),
restriction_dict={"range": ["0..4096"]},
),
is_leaf=True,
yang_name="restart-time",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="uint16",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """restart_time must be of a type compatible with uint16""",
"defined-type": "uint16",
"generated-type": """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), restriction_dict={'range': ['0..4096']}), is_leaf=True, yang_name="restart-time", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint16', is_config=False)""",
}
)
self.__restart_time = t
if hasattr(self, "_set"):
self._set()
def _unset_restart_time(self):
self.__restart_time = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..65535"]}, int_size=16
),
restriction_dict={"range": ["0..4096"]},
),
is_leaf=True,
yang_name="restart-time",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="uint16",
is_config=False,
)
def _get_stale_routes_time(self):
"""
Getter method for stale_routes_time, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/peer_groups/peer_group/graceful_restart/state/stale_routes_time (decimal64)
YANG Description: An upper bound on the time that stale routes will be
retained by a router after a session is restarted. If an
End-of-RIB (EOR) marker is received prior to this timer
expiring, stale routes will be flushed upon its receipt; if
no EOR is received, then stale paths will be purged when this
timer expires. This timer is referred to as the
Selection_Deferral_Timer in RFC4724.
"""
return self.__stale_routes_time
def _set_stale_routes_time(self, v, load=False):
"""
Setter method for stale_routes_time, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/peer_groups/peer_group/graceful_restart/state/stale_routes_time (decimal64)
If this variable is read-only (config: false) in the
source YANG file, then _set_stale_routes_time is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_stale_routes_time() directly.
YANG Description: An upper bound on the time that stale routes will be
retained by a router after a session is restarted. If an
End-of-RIB (EOR) marker is received prior to this timer
expiring, stale routes will be flushed upon its receipt; if
no EOR is received, then stale paths will be purged when this
timer expires. This timer is referred to as the
Selection_Deferral_Timer in RFC4724.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=RestrictedPrecisionDecimalType(precision=2),
is_leaf=True,
yang_name="stale-routes-time",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="decimal64",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """stale_routes_time must be of a type compatible with decimal64""",
"defined-type": "decimal64",
"generated-type": """YANGDynClass(base=RestrictedPrecisionDecimalType(precision=2), is_leaf=True, yang_name="stale-routes-time", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='decimal64', is_config=False)""",
}
)
self.__stale_routes_time = t
if hasattr(self, "_set"):
self._set()
def _unset_stale_routes_time(self):
self.__stale_routes_time = YANGDynClass(
base=RestrictedPrecisionDecimalType(precision=2),
is_leaf=True,
yang_name="stale-routes-time",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="decimal64",
is_config=False,
)
def _get_helper_only(self):
"""
Getter method for helper_only, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/peer_groups/peer_group/graceful_restart/state/helper_only (boolean)
YANG Description: Enable graceful-restart in helper mode only. When this
leaf is set, the local system does not retain its own
forwarding state during a restart, but supports procedures
for the receiving speaker, as defined in RFC4724.
"""
return self.__helper_only
def _set_helper_only(self, v, load=False):
"""
Setter method for helper_only, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/peer_groups/peer_group/graceful_restart/state/helper_only (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_helper_only is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_helper_only() directly.
YANG Description: Enable graceful-restart in helper mode only. When this
leaf is set, the local system does not retain its own
forwarding state during a restart, but supports procedures
for the receiving speaker, as defined in RFC4724.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=YANGBool,
is_leaf=True,
yang_name="helper-only",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """helper_only must be of a type compatible with boolean""",
"defined-type": "boolean",
"generated-type": """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="helper-only", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='boolean', is_config=False)""",
}
)
self.__helper_only = t
if hasattr(self, "_set"):
self._set()
def _unset_helper_only(self):
self.__helper_only = YANGDynClass(
base=YANGBool,
is_leaf=True,
yang_name="helper-only",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=False,
)
enabled = __builtin__.property(_get_enabled)
restart_time = __builtin__.property(_get_restart_time)
stale_routes_time = __builtin__.property(_get_stale_routes_time)
helper_only = __builtin__.property(_get_helper_only)
_pyangbind_elements = OrderedDict(
[
("enabled", enabled),
("restart_time", restart_time),
("stale_routes_time", stale_routes_time),
("helper_only", helper_only),
]
)
class state(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module openconfig-network-instance-l2 - based on the path /network-instances/network-instance/protocols/protocol/bgp/peer-groups/peer-group/graceful-restart/state. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: State information associated with graceful-restart
"""
__slots__ = (
"_path_helper",
"_extmethods",
"__enabled",
"__restart_time",
"__stale_routes_time",
"__helper_only",
)
_yang_name = "state"
_pybind_generated_by = "container"
def __init__(self, *args, **kwargs):
self._path_helper = False
self._extmethods = False
self.__enabled = YANGDynClass(
base=YANGBool,
is_leaf=True,
yang_name="enabled",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=False,
)
self.__restart_time = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..65535"]}, int_size=16
),
restriction_dict={"range": ["0..4096"]},
),
is_leaf=True,
yang_name="restart-time",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="uint16",
is_config=False,
)
self.__stale_routes_time = YANGDynClass(
base=RestrictedPrecisionDecimalType(precision=2),
is_leaf=True,
yang_name="stale-routes-time",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="decimal64",
is_config=False,
)
self.__helper_only = YANGDynClass(
base=YANGBool,
is_leaf=True,
yang_name="helper-only",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=False,
)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path() + [self._yang_name]
else:
return [
"network-instances",
"network-instance",
"protocols",
"protocol",
"bgp",
"peer-groups",
"peer-group",
"graceful-restart",
"state",
]
def _get_enabled(self):
"""
Getter method for enabled, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/peer_groups/peer_group/graceful_restart/state/enabled (boolean)
YANG Description: Enable or disable the graceful-restart capability.
"""
return self.__enabled
def _set_enabled(self, v, load=False):
"""
Setter method for enabled, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/peer_groups/peer_group/graceful_restart/state/enabled (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_enabled is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_enabled() directly.
YANG Description: Enable or disable the graceful-restart capability.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=YANGBool,
is_leaf=True,
yang_name="enabled",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """enabled must be of a type compatible with boolean""",
"defined-type": "boolean",
"generated-type": """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='boolean', is_config=False)""",
}
)
self.__enabled = t
if hasattr(self, "_set"):
self._set()
def _unset_enabled(self):
self.__enabled = YANGDynClass(
base=YANGBool,
is_leaf=True,
yang_name="enabled",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=False,
)
def _get_restart_time(self):
"""
Getter method for restart_time, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/peer_groups/peer_group/graceful_restart/state/restart_time (uint16)
YANG Description: Estimated time (in seconds) for the local BGP speaker to
restart a session. This value is advertised in the graceful
restart BGP capability. This is a 12-bit value, referred to
as Restart Time in RFC4724. Per RFC4724, the suggested
default value is <= the hold-time value.
"""
return self.__restart_time
def _set_restart_time(self, v, load=False):
"""
Setter method for restart_time, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/peer_groups/peer_group/graceful_restart/state/restart_time (uint16)
If this variable is read-only (config: false) in the
source YANG file, then _set_restart_time is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_restart_time() directly.
YANG Description: Estimated time (in seconds) for the local BGP speaker to
restart a session. This value is advertised in the graceful
restart BGP capability. This is a 12-bit value, referred to
as Restart Time in RFC4724. Per RFC4724, the suggested
default value is <= the hold-time value.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int,
restriction_dict={"range": ["0..65535"]},
int_size=16,
),
restriction_dict={"range": ["0..4096"]},
),
is_leaf=True,
yang_name="restart-time",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="uint16",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """restart_time must be of a type compatible with uint16""",
"defined-type": "uint16",
"generated-type": """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), restriction_dict={'range': ['0..4096']}), is_leaf=True, yang_name="restart-time", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint16', is_config=False)""",
}
)
self.__restart_time = t
if hasattr(self, "_set"):
self._set()
def _unset_restart_time(self):
self.__restart_time = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..65535"]}, int_size=16
),
restriction_dict={"range": ["0..4096"]},
),
is_leaf=True,
yang_name="restart-time",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="uint16",
is_config=False,
)
def _get_stale_routes_time(self):
"""
Getter method for stale_routes_time, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/peer_groups/peer_group/graceful_restart/state/stale_routes_time (decimal64)
YANG Description: An upper bound on the time that stale routes will be
retained by a router after a session is restarted. If an
End-of-RIB (EOR) marker is received prior to this timer
expiring, stale routes will be flushed upon its receipt; if
no EOR is received, then stale paths will be purged when this
timer expires. This timer is referred to as the
Selection_Deferral_Timer in RFC4724.
"""
return self.__stale_routes_time
def _set_stale_routes_time(self, v, load=False):
"""
Setter method for stale_routes_time, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/peer_groups/peer_group/graceful_restart/state/stale_routes_time (decimal64)
If this variable is read-only (config: false) in the
source YANG file, then _set_stale_routes_time is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_stale_routes_time() directly.
YANG Description: An upper bound on the time that stale routes will be
retained by a router after a session is restarted. If an
End-of-RIB (EOR) marker is received prior to this timer
expiring, stale routes will be flushed upon its receipt; if
no EOR is received, then stale paths will be purged when this
timer expires. This timer is referred to as the
Selection_Deferral_Timer in RFC4724.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=RestrictedPrecisionDecimalType(precision=2),
is_leaf=True,
yang_name="stale-routes-time",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="decimal64",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """stale_routes_time must be of a type compatible with decimal64""",
"defined-type": "decimal64",
"generated-type": """YANGDynClass(base=RestrictedPrecisionDecimalType(precision=2), is_leaf=True, yang_name="stale-routes-time", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='decimal64', is_config=False)""",
}
)
self.__stale_routes_time = t
if hasattr(self, "_set"):
self._set()
def _unset_stale_routes_time(self):
self.__stale_routes_time = YANGDynClass(
base=RestrictedPrecisionDecimalType(precision=2),
is_leaf=True,
yang_name="stale-routes-time",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="decimal64",
is_config=False,
)
def _get_helper_only(self):
"""
Getter method for helper_only, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/peer_groups/peer_group/graceful_restart/state/helper_only (boolean)
YANG Description: Enable graceful-restart in helper mode only. When this
leaf is set, the local system does not retain its own
forwarding state during a restart, but supports procedures
for the receiving speaker, as defined in RFC4724.
"""
return self.__helper_only
def _set_helper_only(self, v, load=False):
"""
Setter method for helper_only, mapped from YANG variable /network_instances/network_instance/protocols/protocol/bgp/peer_groups/peer_group/graceful_restart/state/helper_only (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_helper_only is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_helper_only() directly.
YANG Description: Enable graceful-restart in helper mode only. When this
leaf is set, the local system does not retain its own
forwarding state during a restart, but supports procedures
for the receiving speaker, as defined in RFC4724.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=YANGBool,
is_leaf=True,
yang_name="helper-only",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """helper_only must be of a type compatible with boolean""",
"defined-type": "boolean",
"generated-type": """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="helper-only", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='boolean', is_config=False)""",
}
)
self.__helper_only = t
if hasattr(self, "_set"):
self._set()
def _unset_helper_only(self):
self.__helper_only = YANGDynClass(
base=YANGBool,
is_leaf=True,
yang_name="helper-only",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="boolean",
is_config=False,
)
enabled = __builtin__.property(_get_enabled)
restart_time = __builtin__.property(_get_restart_time)
stale_routes_time = __builtin__.property(_get_stale_routes_time)
helper_only = __builtin__.property(_get_helper_only)
_pyangbind_elements = OrderedDict(
[
("enabled", enabled),
("restart_time", restart_time),
("stale_routes_time", stale_routes_time),
("helper_only", helper_only),
]
)
| 41.921323 | 499 | 0.618142 | 4,066 | 36,765 | 5.362518 | 0.060994 | 0.059163 | 0.042378 | 0.04834 | 0.982801 | 0.972849 | 0.972849 | 0.972849 | 0.972849 | 0.972849 | 0 | 0.010198 | 0.290521 | 36,765 | 876 | 500 | 41.969178 | 0.825717 | 0.270883 | 0 | 0.880878 | 0 | 0.012539 | 0.250433 | 0.084104 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043887 | false | 0 | 0.023511 | 0 | 0.11442 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
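The generated `state` class above wraps each leaf in `YANGDynClass` with nested `RestrictedClassType` checks: `restart-time` is a uint16 (0..65535) further restricted to 0..4096, and out-of-range assignments raise `ValueError`. A minimal sketch of that layered range validation, independent of pyangbind (all names here are illustrative, not pyangbind's API):

```python
# Layered range validation, sketching pyangbind's nested RestrictedClassType
# (uint16 base range 0..65535, tightened to 0..4096 for restart-time).

def restricted_int(ranges):
    """Return an int subclass that rejects values outside the given ranges."""
    class RestrictedInt(int):
        def __new__(cls, value):
            v = int(value)
            if not any(lo <= v <= hi for lo, hi in ranges):
                raise ValueError("%d out of range %r" % (v, ranges))
            return super().__new__(cls, v)
    return RestrictedInt

Uint16 = restricted_int([(0, 65535)])
RestartTime = restricted_int([(0, 4096)])  # tighter range layered on uint16

def make_restart_time(value):
    # Mirror the nested check: the value must satisfy both layers.
    return RestartTime(Uint16(value))

print(make_restart_time(120))  # -> 120
try:
    make_restart_time(5000)    # valid uint16, but outside 0..4096
except ValueError as e:
    print("rejected:", e)
```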
4c5434ba0f564da4445938916222a153d7ebe1b0 | 29,393 | py | Python | com/vmware/vmc/orgs/account_link_client.py | vishal-12/vsphere-automation-sdk-python | 9cf363971db77ea5a12928eecd5cf5170a7fcd8a | [
"MIT"
] | null | null | null | com/vmware/vmc/orgs/account_link_client.py | vishal-12/vsphere-automation-sdk-python | 9cf363971db77ea5a12928eecd5cf5170a7fcd8a | [
"MIT"
] | null | null | null | com/vmware/vmc/orgs/account_link_client.py | vishal-12/vsphere-automation-sdk-python | 9cf363971db77ea5a12928eecd5cf5170a7fcd8a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#---------------------------------------------------------------------------
# Copyright 2019 VMware, Inc. All rights reserved.
# AUTO GENERATED FILE -- DO NOT MODIFY!
#
# vAPI stub file for package com.vmware.vmc.orgs.account_link.
#---------------------------------------------------------------------------
"""
"""
__author__ = 'VMware, Inc.'
__docformat__ = 'restructuredtext en'
import sys
from vmware.vapi.bindings import type
from vmware.vapi.bindings.converter import TypeConverter
from vmware.vapi.bindings.enum import Enum
from vmware.vapi.bindings.error import VapiError
from vmware.vapi.bindings.struct import VapiStruct
from vmware.vapi.bindings.stub import (
ApiInterfaceStub, StubFactoryBase, VapiInterface)
from vmware.vapi.bindings.common import raise_core_exception
from vmware.vapi.data.validator import (UnionValidator, HasFieldsOfValidator)
from vmware.vapi.exception import CoreException
from vmware.vapi.lib.constants import TaskType
from vmware.vapi.lib.rest import OperationRestMetadata
class CompatibleSubnets(VapiInterface):
"""
"""
_VAPI_SERVICE_ID = 'com.vmware.vmc.orgs.account_link.compatible_subnets'
"""
Identifier of the service in canonical form.
"""
def __init__(self, config):
"""
:type config: :class:`vmware.vapi.bindings.stub.StubConfiguration`
:param config: Configuration to be used for creating the stub.
"""
VapiInterface.__init__(self, config, _CompatibleSubnetsStub)
def get(self,
org,
linked_account_id=None,
region=None,
sddc=None,
force_refresh=None,
instance_type=None,
sddc_type=None,
num_of_hosts=None,
):
"""
Gets a customer's compatible subnets for account linking
:type org: :class:`str`
:param org: Organization identifier. (required)
:type linked_account_id: :class:`str` or ``None``
:param linked_account_id: The linked connected account identifier (optional)
:type region: :class:`str` or ``None``
:param region: The region of the cloud resources to work in (optional)
:type sddc: :class:`str` or ``None``
:param sddc: sddc (optional)
:type force_refresh: :class:`bool` or ``None``
:param force_refresh: When true, forces the mappings for datacenters to be refreshed for
the connected account. (optional)
:type instance_type: :class:`str` or ``None``
:param instance_type: The server instance type to be used. (optional)
:type sddc_type: :class:`str` or ``None``
:param sddc_type: The sddc type to be used. (1NODE, SingleAZ, MultiAZ) (optional)
:type num_of_hosts: :class:`long` or ``None``
:param num_of_hosts: The number of hosts (optional)
:rtype: :class:`com.vmware.vmc.model_client.AwsCompatibleSubnets`
:return: com.vmware.vmc.model.AwsCompatibleSubnets
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthenticated`
Unauthorized
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
Forbidden
"""
return self._invoke('get',
{
'org': org,
'linked_account_id': linked_account_id,
'region': region,
'sddc': sddc,
'force_refresh': force_refresh,
'instance_type': instance_type,
'sddc_type': sddc_type,
'num_of_hosts': num_of_hosts,
})
def post(self,
org,
):
"""
Sets which subnet to use to link accounts and finishes the linking
process
:type org: :class:`str`
:param org: Organization identifier. (required)
:rtype: :class:`com.vmware.vmc.model_client.AwsSubnet`
:return: com.vmware.vmc.model.AwsSubnet
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthenticated`
Unauthorized
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
Forbidden
"""
return self._invoke('post',
{
'org': org,
})
class CompatibleSubnetsAsync(VapiInterface):
"""
"""
_VAPI_SERVICE_ID = 'com.vmware.vmc.orgs.account_link.compatible_subnets_async'
"""
Identifier of the service in canonical form.
"""
def __init__(self, config):
"""
:type config: :class:`vmware.vapi.bindings.stub.StubConfiguration`
:param config: Configuration to be used for creating the stub.
"""
VapiInterface.__init__(self, config, _CompatibleSubnetsAsyncStub)
def get(self,
org,
linked_account_id=None,
region=None,
sddc=None,
instance_type=None,
sddc_type=None,
num_of_hosts=None,
):
"""
Gets a customer's compatible subnets for account linking via a task.
The information is returned as a member of the task (found in
task.params['subnet_list_result'] when you are notified it is
complete), and it's documented under ref
/definitions/AwsCompatibleSubnets
:type org: :class:`str`
:param org: Organization identifier. (required)
:type linked_account_id: :class:`str` or ``None``
:param linked_account_id: The linked connected account identifier (optional)
:type region: :class:`str` or ``None``
:param region: The region of the cloud resources to work in (optional)
:type sddc: :class:`str` or ``None``
:param sddc: sddc (optional)
:type instance_type: :class:`str` or ``None``
:param instance_type: The server instance type to be used. (optional)
:type sddc_type: :class:`str` or ``None``
:param sddc_type: The sddc type to be used. (1NODE, SingleAZ, MultiAZ) (optional)
:type num_of_hosts: :class:`long` or ``None``
:param num_of_hosts: The number of hosts (optional)
:rtype: :class:`com.vmware.vmc.model_client.Task`
:return: com.vmware.vmc.model.Task
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthenticated`
Unauthorized
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
Forbidden
"""
return self._invoke('get',
{
'org': org,
'linked_account_id': linked_account_id,
'region': region,
'sddc': sddc,
'instance_type': instance_type,
'sddc_type': sddc_type,
'num_of_hosts': num_of_hosts,
})
def post(self,
aws_subnet,
org,
):
"""
Sets which subnet to use to link accounts and finishes the linking
process via a task
:type aws_subnet: :class:`com.vmware.vmc.model_client.AwsSubnet`
:param aws_subnet: The subnet chosen by the customer (required)
:type org: :class:`str`
:param org: Organization identifier. (required)
:rtype: :class:`com.vmware.vmc.model_client.Task`
:return: com.vmware.vmc.model.Task
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthenticated`
Unauthorized
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
Forbidden
"""
return self._invoke('post',
{
'aws_subnet': aws_subnet,
'org': org,
})
class ConnectedAccounts(VapiInterface):
"""
"""
_VAPI_SERVICE_ID = 'com.vmware.vmc.orgs.account_link.connected_accounts'
"""
Identifier of the service in canonical form.
"""
def __init__(self, config):
"""
:type config: :class:`vmware.vapi.bindings.stub.StubConfiguration`
:param config: Configuration to be used for creating the stub.
"""
VapiInterface.__init__(self, config, _ConnectedAccountsStub)
def delete(self,
org,
linked_account_path_id,
force_even_when_sddc_present=None,
):
"""
Delete a particular connected (linked) account.
:type org: :class:`str`
:param org: Organization identifier. (required)
:type linked_account_path_id: :class:`str`
:param linked_account_path_id: The linked connected account identifier (required)
:type force_even_when_sddc_present: :class:`bool` or ``None``
:param force_even_when_sddc_present: When true, forcibly removes a connected account even when SDDCs
are still linked to it. (optional)
:rtype: :class:`com.vmware.vmc.model_client.AwsCustomerConnectedAccount`
:return: com.vmware.vmc.model.AwsCustomerConnectedAccount
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthenticated`
Unauthorized
:raise: :class:`com.vmware.vapi.std.errors_client.InvalidRequest`
An invalid connected account ID was specified, or the connection
still has SDDCs active on it.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
Forbidden
"""
return self._invoke('delete',
{
'org': org,
'linked_account_path_id': linked_account_path_id,
'force_even_when_sddc_present': force_even_when_sddc_present,
})
def get(self,
org,
provider=None,
):
"""
Get a list of connected accounts
:type org: :class:`str`
:param org: Organization identifier. (required)
:type provider: :class:`str` or ``None``
:param provider: The cloud provider of the SDDC (AWS or ZeroCloud). Default value is
AWS. (optional)
:rtype: :class:`list` of :class:`com.vmware.vmc.model_client.AwsCustomerConnectedAccount`
:return:
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthenticated`
Unauthorized
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
Forbidden
"""
return self._invoke('get',
{
'org': org,
'provider': provider,
})
class MapCustomerZones(VapiInterface):
"""
"""
_VAPI_SERVICE_ID = 'com.vmware.vmc.orgs.account_link.map_customer_zones'
"""
Identifier of the service in canonical form.
"""
def __init__(self, config):
"""
:type config: :class:`vmware.vapi.bindings.stub.StubConfiguration`
:param config: Configuration to be used for creating the stub.
"""
VapiInterface.__init__(self, config, _MapCustomerZonesStub)
def post(self,
org,
map_zones_request,
):
"""
Creates a task to re-map customer's datacenters across zones.
:type org: :class:`str`
:param org: Organization identifier. (required)
:type map_zones_request: :class:`com.vmware.vmc.model_client.MapZonesRequest`
:param map_zones_request: The zones request information about whom to map and what to map.
(required)
:rtype: :class:`com.vmware.vmc.model_client.Task`
:return: com.vmware.vmc.model.Task
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthenticated`
Unauthorized
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
Forbidden
"""
return self._invoke('post',
{
'org': org,
'map_zones_request': map_zones_request,
})
class SddcConnections(VapiInterface):
"""
"""
_VAPI_SERVICE_ID = 'com.vmware.vmc.orgs.account_link.sddc_connections'
"""
Identifier of the service in canonical form.
"""
def __init__(self, config):
"""
:type config: :class:`vmware.vapi.bindings.stub.StubConfiguration`
:param config: Configuration to be used for creating the stub.
"""
VapiInterface.__init__(self, config, _SddcConnectionsStub)
def get(self,
org,
sddc=None,
):
"""
Get a list of SDDC connections currently setup for the customer's
organization.
:type org: :class:`str`
:param org: Organization identifier. (required)
:type sddc: :class:`str` or ``None``
:param sddc: sddc (optional)
:rtype: :class:`list` of :class:`com.vmware.vmc.model_client.AwsSddcConnection`
:return:
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthenticated`
Unauthorized
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
Forbidden
"""
return self._invoke('get',
{
'org': org,
'sddc': sddc,
})
class _CompatibleSubnetsStub(ApiInterfaceStub):
def __init__(self, config):
# properties for get operation
get_input_type = type.StructType('operation-input', {
'org': type.StringType(),
'linked_account_id': type.OptionalType(type.StringType()),
'region': type.OptionalType(type.StringType()),
'sddc': type.OptionalType(type.StringType()),
'force_refresh': type.OptionalType(type.BooleanType()),
'instance_type': type.OptionalType(type.StringType()),
'sddc_type': type.OptionalType(type.StringType()),
'num_of_hosts': type.OptionalType(type.IntegerType()),
})
get_error_dict = {
'com.vmware.vapi.std.errors.unauthenticated':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthenticated'),
'com.vmware.vapi.std.errors.unauthorized':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthorized'),
}
get_input_value_validator_list = [
]
get_output_validator_list = [
]
get_rest_metadata = OperationRestMetadata(
http_method='GET',
url_template='/vmc/api/orgs/{org}/account-link/compatible-subnets',
path_variables={
'org': 'org',
},
query_parameters={
'linked_account_id': 'linkedAccountId',
'region': 'region',
'sddc': 'sddc',
'force_refresh': 'forceRefresh',
'instance_type': 'instanceType',
'sddc_type': 'sddcType',
'num_of_hosts': 'numOfHosts',
},
content_type='application/json'
)
# properties for post operation
post_input_type = type.StructType('operation-input', {
'org': type.StringType(),
})
post_error_dict = {
'com.vmware.vapi.std.errors.unauthenticated':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthenticated'),
'com.vmware.vapi.std.errors.unauthorized':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthorized'),
}
post_input_value_validator_list = [
]
post_output_validator_list = [
]
post_rest_metadata = OperationRestMetadata(
http_method='POST',
url_template='/vmc/api/orgs/{org}/account-link/compatible-subnets',
path_variables={
'org': 'org',
},
query_parameters={
},
content_type='application/json'
)
operations = {
'get': {
'input_type': get_input_type,
'output_type': type.ReferenceType('com.vmware.vmc.model_client', 'AwsCompatibleSubnets'),
'errors': get_error_dict,
'input_value_validator_list': get_input_value_validator_list,
'output_validator_list': get_output_validator_list,
'task_type': TaskType.NONE,
},
'post': {
'input_type': post_input_type,
'output_type': type.ReferenceType('com.vmware.vmc.model_client', 'AwsSubnet'),
'errors': post_error_dict,
'input_value_validator_list': post_input_value_validator_list,
'output_validator_list': post_output_validator_list,
'task_type': TaskType.NONE,
},
}
rest_metadata = {
'get': get_rest_metadata,
'post': post_rest_metadata,
}
ApiInterfaceStub.__init__(
self, iface_name='com.vmware.vmc.orgs.account_link.compatible_subnets',
config=config, operations=operations, rest_metadata=rest_metadata,
is_vapi_rest=False)
class _CompatibleSubnetsAsyncStub(ApiInterfaceStub):
def __init__(self, config):
# properties for get operation
get_input_type = type.StructType('operation-input', {
'org': type.StringType(),
'linked_account_id': type.OptionalType(type.StringType()),
'region': type.OptionalType(type.StringType()),
'sddc': type.OptionalType(type.StringType()),
'instance_type': type.OptionalType(type.StringType()),
'sddc_type': type.OptionalType(type.StringType()),
'num_of_hosts': type.OptionalType(type.IntegerType()),
})
get_error_dict = {
'com.vmware.vapi.std.errors.unauthenticated':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthenticated'),
'com.vmware.vapi.std.errors.unauthorized':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthorized'),
}
get_input_value_validator_list = [
]
get_output_validator_list = [
]
get_rest_metadata = OperationRestMetadata(
http_method='GET',
url_template='/vmc/api/orgs/{org}/account-link/compatible-subnets-async',
path_variables={
'org': 'org',
},
query_parameters={
'linked_account_id': 'linkedAccountId',
'region': 'region',
'sddc': 'sddc',
'instance_type': 'instanceType',
'sddc_type': 'sddcType',
'num_of_hosts': 'numOfHosts',
},
content_type='application/json'
)
# properties for post operation
post_input_type = type.StructType('operation-input', {
'aws_subnet': type.ReferenceType('com.vmware.vmc.model_client', 'AwsSubnet'),
'org': type.StringType(),
})
post_error_dict = {
'com.vmware.vapi.std.errors.unauthenticated':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthenticated'),
'com.vmware.vapi.std.errors.unauthorized':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthorized'),
}
post_input_value_validator_list = [
]
post_output_validator_list = [
]
post_rest_metadata = OperationRestMetadata(
http_method='POST',
url_template='/vmc/api/orgs/{org}/account-link/compatible-subnets-async',
request_body_parameter='aws_subnet',
path_variables={
'org': 'org',
},
query_parameters={
},
content_type='application/json'
)
operations = {
'get': {
'input_type': get_input_type,
'output_type': type.ReferenceType('com.vmware.vmc.model_client', 'Task'),
'errors': get_error_dict,
'input_value_validator_list': get_input_value_validator_list,
'output_validator_list': get_output_validator_list,
'task_type': TaskType.NONE,
},
'post': {
'input_type': post_input_type,
'output_type': type.ReferenceType('com.vmware.vmc.model_client', 'Task'),
'errors': post_error_dict,
'input_value_validator_list': post_input_value_validator_list,
'output_validator_list': post_output_validator_list,
'task_type': TaskType.NONE,
},
}
rest_metadata = {
'get': get_rest_metadata,
'post': post_rest_metadata,
}
ApiInterfaceStub.__init__(
self, iface_name='com.vmware.vmc.orgs.account_link.compatible_subnets_async',
config=config, operations=operations, rest_metadata=rest_metadata,
is_vapi_rest=False)
class _ConnectedAccountsStub(ApiInterfaceStub):
def __init__(self, config):
# properties for delete operation
delete_input_type = type.StructType('operation-input', {
'org': type.StringType(),
'linked_account_path_id': type.StringType(),
'force_even_when_sddc_present': type.OptionalType(type.BooleanType()),
})
delete_error_dict = {
'com.vmware.vapi.std.errors.unauthenticated':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthenticated'),
'com.vmware.vapi.std.errors.invalid_request':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'InvalidRequest'),
'com.vmware.vapi.std.errors.unauthorized':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthorized'),
}
delete_input_value_validator_list = [
]
delete_output_validator_list = [
]
delete_rest_metadata = OperationRestMetadata(
http_method='DELETE',
url_template='/vmc/api/orgs/{org}/account-link/connected-accounts/{linkedAccountPathId}',
path_variables={
'org': 'org',
'linked_account_path_id': 'linkedAccountPathId',
},
query_parameters={
'force_even_when_sddc_present': 'forceEvenWhenSddcPresent',
},
content_type='application/json'
)
# properties for get operation
get_input_type = type.StructType('operation-input', {
'org': type.StringType(),
'provider': type.OptionalType(type.StringType()),
})
get_error_dict = {
'com.vmware.vapi.std.errors.unauthenticated':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthenticated'),
'com.vmware.vapi.std.errors.unauthorized':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthorized'),
}
get_input_value_validator_list = [
]
get_output_validator_list = [
]
get_rest_metadata = OperationRestMetadata(
http_method='GET',
url_template='/vmc/api/orgs/{org}/account-link/connected-accounts',
path_variables={
'org': 'org',
},
query_parameters={
'provider': 'provider',
},
content_type='application/json'
)
operations = {
'delete': {
'input_type': delete_input_type,
'output_type': type.ReferenceType('com.vmware.vmc.model_client', 'AwsCustomerConnectedAccount'),
'errors': delete_error_dict,
'input_value_validator_list': delete_input_value_validator_list,
'output_validator_list': delete_output_validator_list,
'task_type': TaskType.NONE,
},
'get': {
'input_type': get_input_type,
'output_type': type.ListType(type.ReferenceType('com.vmware.vmc.model_client', 'AwsCustomerConnectedAccount')),
'errors': get_error_dict,
'input_value_validator_list': get_input_value_validator_list,
'output_validator_list': get_output_validator_list,
'task_type': TaskType.NONE,
},
}
rest_metadata = {
'delete': delete_rest_metadata,
'get': get_rest_metadata,
}
ApiInterfaceStub.__init__(
self, iface_name='com.vmware.vmc.orgs.account_link.connected_accounts',
config=config, operations=operations, rest_metadata=rest_metadata,
is_vapi_rest=False)
class _MapCustomerZonesStub(ApiInterfaceStub):
def __init__(self, config):
# properties for post operation
post_input_type = type.StructType('operation-input', {
'org': type.StringType(),
'map_zones_request': type.ReferenceType('com.vmware.vmc.model_client', 'MapZonesRequest'),
})
post_error_dict = {
'com.vmware.vapi.std.errors.unauthenticated':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthenticated'),
'com.vmware.vapi.std.errors.unauthorized':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthorized'),
}
post_input_value_validator_list = [
]
post_output_validator_list = [
]
post_rest_metadata = OperationRestMetadata(
http_method='POST',
url_template='/vmc/api/orgs/{org}/account-link/map-customer-zones',
request_body_parameter='map_zones_request',
path_variables={
'org': 'org',
},
query_parameters={
},
content_type='application/json'
)
operations = {
'post': {
'input_type': post_input_type,
'output_type': type.ReferenceType('com.vmware.vmc.model_client', 'Task'),
'errors': post_error_dict,
'input_value_validator_list': post_input_value_validator_list,
'output_validator_list': post_output_validator_list,
'task_type': TaskType.NONE,
},
}
rest_metadata = {
'post': post_rest_metadata,
}
ApiInterfaceStub.__init__(
self, iface_name='com.vmware.vmc.orgs.account_link.map_customer_zones',
config=config, operations=operations, rest_metadata=rest_metadata,
is_vapi_rest=False)
class _SddcConnectionsStub(ApiInterfaceStub):
def __init__(self, config):
# properties for get operation
get_input_type = type.StructType('operation-input', {
'org': type.StringType(),
'sddc': type.OptionalType(type.StringType()),
})
get_error_dict = {
'com.vmware.vapi.std.errors.unauthenticated':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthenticated'),
'com.vmware.vapi.std.errors.unauthorized':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthorized'),
}
get_input_value_validator_list = [
]
get_output_validator_list = [
]
get_rest_metadata = OperationRestMetadata(
http_method='GET',
url_template='/vmc/api/orgs/{org}/account-link/sddc-connections',
path_variables={
'org': 'org',
},
query_parameters={
'sddc': 'sddc',
},
content_type='application/json'
)
operations = {
'get': {
'input_type': get_input_type,
'output_type': type.ListType(type.ReferenceType('com.vmware.vmc.model_client', 'AwsSddcConnection')),
'errors': get_error_dict,
'input_value_validator_list': get_input_value_validator_list,
'output_validator_list': get_output_validator_list,
'task_type': TaskType.NONE,
},
}
rest_metadata = {
'get': get_rest_metadata,
}
ApiInterfaceStub.__init__(
self, iface_name='com.vmware.vmc.orgs.account_link.sddc_connections',
config=config, operations=operations, rest_metadata=rest_metadata,
is_vapi_rest=False)
class StubFactory(StubFactoryBase):
_attrs = {
'CompatibleSubnets': CompatibleSubnets,
'CompatibleSubnetsAsync': CompatibleSubnetsAsync,
'ConnectedAccounts': ConnectedAccounts,
'MapCustomerZones': MapCustomerZones,
'SddcConnections': SddcConnections,
}
| 39.242991 | 127 | 0.575783 | 2,840 | 29,393 | 5.708099 | 0.087676 | 0.048856 | 0.040898 | 0.050336 | 0.831658 | 0.809512 | 0.779286 | 0.762137 | 0.745605 | 0.734008 | 0 | 0.000347 | 0.313408 | 29,393 | 748 | 128 | 39.295455 | 0.802933 | 0.242507 | 0 | 0.645963 | 1 | 0 | 0.239887 | 0.150779 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037267 | false | 0 | 0.024845 | 0 | 0.113872 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
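The `account_link_client.py` stub above declares, in `_CompatibleSubnetsStub.get_rest_metadata`, a mapping from each snake_case Python argument to its camelCase REST query parameter, and wraps every optional argument in `type.OptionalType` so unset (`None`) values are omitted from the request. A minimal standalone sketch of that translation follows; the helper name `build_query` is hypothetical, and the real stub performs this work inside the `vmware.vapi` runtime rather than in user code:

```python
# Snake_case -> camelCase query-parameter mapping, copied from the
# get_rest_metadata declaration in _CompatibleSubnetsStub above.
QUERY_PARAMETERS = {
    'linked_account_id': 'linkedAccountId',
    'region': 'region',
    'sddc': 'sddc',
    'force_refresh': 'forceRefresh',
    'instance_type': 'instanceType',
    'sddc_type': 'sddcType',
    'num_of_hosts': 'numOfHosts',
}

def build_query(**kwargs):
    """Translate bound-method keyword arguments into REST query
    parameters, dropping optional arguments left as None (mirroring
    how OptionalType fields are skipped)."""
    return {
        QUERY_PARAMETERS[name]: value
        for name, value in kwargs.items()
        if value is not None
    }

# e.g. CompatibleSubnets.get(org, region='us-west-2', force_refresh=True)
# would send ?region=us-west-2&forceRefresh=true on the wire.
print(build_query(region='us-west-2', force_refresh=True, sddc=None))
```

Actually invoking `CompatibleSubnets(config).get(...)` requires a `vmware.vapi.bindings.stub.StubConfiguration` built from an authenticated session; the sketch deliberately avoids that dependency and only illustrates the parameter-mapping behavior encoded in the stub's metadata.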
d5b5a95bb7befbe7db9ff76890efa892ab0b9dbe | 98 | py | Python | src/mot/object_detection/dataset/__init__.py | amartya-dev/mot | 6706860b10b524c4f90dfd98ec9b1f2433e4e09e | [
"MIT"
] | 22 | 2019-10-04T06:42:04.000Z | 2022-03-25T02:32:50.000Z | src/mot/object_detection/dataset/__init__.py | amartya-dev/mot | 6706860b10b524c4f90dfd98ec9b1f2433e4e09e | [
"MIT"
] | 43 | 2019-10-05T10:15:32.000Z | 2022-02-09T23:36:30.000Z | src/mot/object_detection/dataset/__init__.py | amartya-dev/mot | 6706860b10b524c4f90dfd98ec9b1f2433e4e09e | [
"MIT"
] | 15 | 2019-09-26T14:58:38.000Z | 2022-03-29T20:22:31.000Z | from mot.object_detection.dataset.dataset import *
from mot.object_detection.dataset.mot import *
| 32.666667 | 50 | 0.836735 | 14 | 98 | 5.714286 | 0.428571 | 0.175 | 0.325 | 0.55 | 0.725 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 98 | 2 | 51 | 49 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |